Jan 26 16:58:22 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 26 16:58:22 crc restorecon[4713]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 16:58:22 crc restorecon[4713]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 26 16:58:22 crc restorecon[4713]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:22 crc 
restorecon[4713]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 16:58:22 crc restorecon[4713]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 16:58:22 crc restorecon[4713]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 16:58:22 crc restorecon[4713]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 16:58:23 crc 
restorecon[4713]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 
16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 16:58:23 crc 
restorecon[4713]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 16:58:23 crc 
restorecon[4713]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23
crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 16:58:23 crc restorecon[4713]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 
crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc 
restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc 
restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc 
restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc 
restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 16:58:23 crc 
restorecon[4713]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 16:58:23 crc restorecon[4713]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 16:58:23 crc restorecon[4713]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 16:58:23 crc restorecon[4713]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 26 16:58:24 crc kubenswrapper[4856]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 16:58:24 crc kubenswrapper[4856]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 26 16:58:24 crc kubenswrapper[4856]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 16:58:24 crc kubenswrapper[4856]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 26 16:58:24 crc kubenswrapper[4856]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 26 16:58:24 crc kubenswrapper[4856]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.788234 4856 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.792696 4856 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.792721 4856 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.792727 4856 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.792733 4856 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.792739 4856 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.792747 4856 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.792755 4856 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.792761 4856 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.792769 4856 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. 
It will be removed in a future release. Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.792777 4856 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.792798 4856 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.792805 4856 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.792812 4856 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.792818 4856 feature_gate.go:330] unrecognized feature gate: Example Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.792824 4856 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.792830 4856 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.792835 4856 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.792840 4856 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.792846 4856 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.792851 4856 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.792857 4856 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.792862 4856 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.792869 4856 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.792876 4856 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.792882 4856 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.792888 4856 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.792895 4856 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.792903 4856 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.792910 4856 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.792917 4856 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.792926 4856 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.792934 4856 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.792941 4856 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.792947 4856 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.792953 4856 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.792960 4856 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.792966 4856 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.792972 4856 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.792978 4856 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.792983 4856 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.792989 4856 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.792994 4856 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.793000 4856 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.793006 4856 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.793011 4856 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.793017 4856 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.793024 4856 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.793030 4856 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.793035 4856 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.793041 4856 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.793047 4856 
feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.793053 4856 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.793059 4856 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.793065 4856 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.793071 4856 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.793078 4856 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.793083 4856 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.793090 4856 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.793096 4856 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.793102 4856 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.793107 4856 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.793112 4856 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.793118 4856 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.793123 4856 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.793129 4856 
feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.793134 4856 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.793140 4856 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.793145 4856 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.793151 4856 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.793157 4856 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.793162 4856 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793295 4856 flags.go:64] FLAG: --address="0.0.0.0" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793309 4856 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793322 4856 flags.go:64] FLAG: --anonymous-auth="true" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793331 4856 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793340 4856 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793348 4856 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793358 4856 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793367 4856 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793373 4856 flags.go:64] FLAG: 
--authorization-webhook-cache-unauthorized-ttl="30s" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793380 4856 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793387 4856 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793395 4856 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793402 4856 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793408 4856 flags.go:64] FLAG: --cgroup-root="" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793414 4856 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793419 4856 flags.go:64] FLAG: --client-ca-file="" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793425 4856 flags.go:64] FLAG: --cloud-config="" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793431 4856 flags.go:64] FLAG: --cloud-provider="" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793437 4856 flags.go:64] FLAG: --cluster-dns="[]" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793446 4856 flags.go:64] FLAG: --cluster-domain="" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793452 4856 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793458 4856 flags.go:64] FLAG: --config-dir="" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793463 4856 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793470 4856 flags.go:64] FLAG: --container-log-max-files="5" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793479 4856 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793486 4856 flags.go:64] 
FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793492 4856 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793499 4856 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793505 4856 flags.go:64] FLAG: --contention-profiling="false" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793511 4856 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793516 4856 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793543 4856 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793551 4856 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793559 4856 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793566 4856 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793572 4856 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793579 4856 flags.go:64] FLAG: --enable-load-reader="false" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793585 4856 flags.go:64] FLAG: --enable-server="true" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793591 4856 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793601 4856 flags.go:64] FLAG: --event-burst="100" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793608 4856 flags.go:64] FLAG: --event-qps="50" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793617 4856 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 
16:58:24.793624 4856 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793631 4856 flags.go:64] FLAG: --eviction-hard="" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793639 4856 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793645 4856 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793650 4856 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793657 4856 flags.go:64] FLAG: --eviction-soft="" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793663 4856 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793672 4856 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793679 4856 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793685 4856 flags.go:64] FLAG: --experimental-mounter-path="" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793691 4856 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793698 4856 flags.go:64] FLAG: --fail-swap-on="true" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793705 4856 flags.go:64] FLAG: --feature-gates="" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793714 4856 flags.go:64] FLAG: --file-check-frequency="20s" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793721 4856 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793727 4856 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793734 4856 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 26 16:58:24 crc 
kubenswrapper[4856]: I0126 16:58:24.793741 4856 flags.go:64] FLAG: --healthz-port="10248" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793748 4856 flags.go:64] FLAG: --help="false" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793754 4856 flags.go:64] FLAG: --hostname-override="" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793760 4856 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793767 4856 flags.go:64] FLAG: --http-check-frequency="20s" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793773 4856 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793779 4856 flags.go:64] FLAG: --image-credential-provider-config="" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793785 4856 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793791 4856 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793798 4856 flags.go:64] FLAG: --image-service-endpoint="" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793805 4856 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793812 4856 flags.go:64] FLAG: --kube-api-burst="100" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793818 4856 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793825 4856 flags.go:64] FLAG: --kube-api-qps="50" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793832 4856 flags.go:64] FLAG: --kube-reserved="" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793838 4856 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793844 4856 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 26 16:58:24 crc 
kubenswrapper[4856]: I0126 16:58:24.793851 4856 flags.go:64] FLAG: --kubelet-cgroups="" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793857 4856 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793863 4856 flags.go:64] FLAG: --lock-file="" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793869 4856 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793876 4856 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793883 4856 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793894 4856 flags.go:64] FLAG: --log-json-split-stream="false" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793902 4856 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793928 4856 flags.go:64] FLAG: --log-text-split-stream="false" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793935 4856 flags.go:64] FLAG: --logging-format="text" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793941 4856 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793949 4856 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793956 4856 flags.go:64] FLAG: --manifest-url="" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793962 4856 flags.go:64] FLAG: --manifest-url-header="" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793970 4856 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793977 4856 flags.go:64] FLAG: --max-open-files="1000000" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793986 4856 flags.go:64] FLAG: --max-pods="110" Jan 26 16:58:24 crc 
kubenswrapper[4856]: I0126 16:58:24.793992 4856 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.793999 4856 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794005 4856 flags.go:64] FLAG: --memory-manager-policy="None" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794012 4856 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794018 4856 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794024 4856 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794031 4856 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794065 4856 flags.go:64] FLAG: --node-status-max-images="50" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794073 4856 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794080 4856 flags.go:64] FLAG: --oom-score-adj="-999" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794087 4856 flags.go:64] FLAG: --pod-cidr="" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794093 4856 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794104 4856 flags.go:64] FLAG: --pod-manifest-path="" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794111 4856 flags.go:64] FLAG: --pod-max-pids="-1" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794117 4856 flags.go:64] FLAG: --pods-per-core="0" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794124 4856 
flags.go:64] FLAG: --port="10250" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794130 4856 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794136 4856 flags.go:64] FLAG: --provider-id="" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794143 4856 flags.go:64] FLAG: --qos-reserved="" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794149 4856 flags.go:64] FLAG: --read-only-port="10255" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794156 4856 flags.go:64] FLAG: --register-node="true" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794163 4856 flags.go:64] FLAG: --register-schedulable="true" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794170 4856 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794181 4856 flags.go:64] FLAG: --registry-burst="10" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794187 4856 flags.go:64] FLAG: --registry-qps="5" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794194 4856 flags.go:64] FLAG: --reserved-cpus="" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794202 4856 flags.go:64] FLAG: --reserved-memory="" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794210 4856 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794217 4856 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794223 4856 flags.go:64] FLAG: --rotate-certificates="false" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794230 4856 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794236 4856 flags.go:64] FLAG: --runonce="false" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794243 4856 flags.go:64] FLAG: 
--runtime-cgroups="/system.slice/crio.service" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794249 4856 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794256 4856 flags.go:64] FLAG: --seccomp-default="false" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794262 4856 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794269 4856 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794275 4856 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794282 4856 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794289 4856 flags.go:64] FLAG: --storage-driver-password="root" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794296 4856 flags.go:64] FLAG: --storage-driver-secure="false" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794302 4856 flags.go:64] FLAG: --storage-driver-table="stats" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794308 4856 flags.go:64] FLAG: --storage-driver-user="root" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794314 4856 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794321 4856 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794327 4856 flags.go:64] FLAG: --system-cgroups="" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794334 4856 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794343 4856 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794350 4856 flags.go:64] FLAG: --tls-cert-file="" Jan 26 16:58:24 crc 
kubenswrapper[4856]: I0126 16:58:24.794356 4856 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794364 4856 flags.go:64] FLAG: --tls-min-version="" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794371 4856 flags.go:64] FLAG: --tls-private-key-file="" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794377 4856 flags.go:64] FLAG: --topology-manager-policy="none" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794384 4856 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794391 4856 flags.go:64] FLAG: --topology-manager-scope="container" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794397 4856 flags.go:64] FLAG: --v="2" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794407 4856 flags.go:64] FLAG: --version="false" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794415 4856 flags.go:64] FLAG: --vmodule="" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794423 4856 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 26 16:58:24 crc kubenswrapper[4856]: I0126 16:58:24.794430 4856 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.794613 4856 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.794624 4856 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.794631 4856 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.794637 4856 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.794643 4856 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.794649 4856 
feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.794655 4856 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.794660 4856 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.794666 4856 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 26 16:58:24 crc kubenswrapper[4856]: W0126 16:58:24.794672 4856 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.794678 4856 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.794686 4856 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.794692 4856 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.794698 4856 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.794703 4856 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.794709 4856 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.794715 4856 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.794720 4856 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.794726 4856 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.794731 4856 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 26 16:58:25 crc 
kubenswrapper[4856]: W0126 16:58:24.794736 4856 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.794742 4856 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.794750 4856 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.794756 4856 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.794761 4856 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.794769 4856 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.794775 4856 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.794780 4856 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.794786 4856 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.794792 4856 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.794798 4856 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.794804 4856 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.794809 4856 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.794817 4856 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. 
It will be removed in a future release. Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.794824 4856 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.794830 4856 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.794836 4856 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.794842 4856 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.794865 4856 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.794871 4856 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.794877 4856 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.794883 4856 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.794889 4856 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.794895 4856 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.794901 4856 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.794907 4856 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.794912 4856 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.794918 4856 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 26 16:58:25 crc kubenswrapper[4856]: 
W0126 16:58:24.794924 4856 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.794929 4856 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.794935 4856 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.794941 4856 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.794946 4856 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.794952 4856 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.794959 4856 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.794965 4856 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.794971 4856 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.794977 4856 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.794983 4856 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.794990 4856 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.794997 4856 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.795004 4856 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.795012 4856 feature_gate.go:330] unrecognized feature gate: Example Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.795019 4856 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.795026 4856 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.795033 4856 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.795040 4856 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.795048 4856 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.795055 4856 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.795062 4856 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.795068 4856 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:24.795129 4856 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:24.820638 4856 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:24.820682 4856 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.820788 4856 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.820800 4856 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.820805 4856 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.820810 4856 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.820814 4856 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.820818 4856 feature_gate.go:330] 
unrecognized feature gate: MixedCPUsAllocation Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.820822 4856 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.820825 4856 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.820829 4856 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.820833 4856 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.820837 4856 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.820840 4856 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.820844 4856 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.820847 4856 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.820853 4856 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.820859 4856 feature_gate.go:330] unrecognized feature gate: Example Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.820863 4856 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.820867 4856 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.820871 4856 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.820875 4856 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.820879 4856 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.820882 4856 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.820886 4856 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.820890 4856 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.820894 4856 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.820897 4856 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.820901 4856 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.820905 4856 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.820908 4856 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.820921 4856 feature_gate.go:330] unrecognized 
feature gate: VSphereDriverConfiguration Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.820924 4856 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.820930 4856 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.820933 4856 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.820937 4856 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.820941 4856 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.820946 4856 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.820951 4856 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.820955 4856 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.820959 4856 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.820962 4856 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.820966 4856 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.820970 4856 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.820974 4856 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.820978 4856 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 26 
16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.820981 4856 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.820985 4856 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.820988 4856 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.820992 4856 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.820996 4856 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.820999 4856 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.821003 4856 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.821008 4856 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.821012 4856 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.821016 4856 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.821021 4856 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.821026 4856 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.821030 4856 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.821034 4856 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.821037 4856 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.821041 4856 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.821044 4856 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.821048 4856 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.821052 4856 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.821056 4856 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.821060 4856 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.821064 4856 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.821068 4856 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.821072 4856 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.821076 4856 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 26 16:58:25 crc 
kubenswrapper[4856]: W0126 16:58:24.821079 4856 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.821083 4856 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:24.821090 4856 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.821429 4856 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.821442 4856 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.821446 4856 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.821451 4856 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.821455 4856 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.821459 4856 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.821471 4856 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.821474 4856 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.821478 4856 
feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.821482 4856 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.821486 4856 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.821489 4856 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.821493 4856 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.821497 4856 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.821502 4856 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.821506 4856 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.821510 4856 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.821514 4856 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.821533 4856 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.821538 4856 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.821542 4856 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.822391 4856 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.822397 4856 feature_gate.go:330] unrecognized feature gate: 
IngressControllerLBSubnetsAWS Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.822400 4856 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.822405 4856 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.822411 4856 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.822417 4856 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.822421 4856 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.822425 4856 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.822439 4856 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.822444 4856 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.822454 4856 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.822459 4856 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.822463 4856 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.822467 4856 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.822471 4856 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.822475 4856 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.822478 4856 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.822482 4856 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.822486 4856 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.822491 4856 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.822494 4856 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.822499 4856 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.822507 4856 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.822511 4856 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.822515 4856 
feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.822520 4856 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.822541 4856 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.822544 4856 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.822548 4856 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.822552 4856 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.822556 4856 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.822559 4856 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.822563 4856 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.822569 4856 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.822573 4856 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.822580 4856 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.822583 4856 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.822587 4856 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.822593 4856 feature_gate.go:353] Setting GA feature gate 
ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.822597 4856 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.822601 4856 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.822606 4856 feature_gate.go:330] unrecognized feature gate: Example Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.822610 4856 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.822613 4856 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.822618 4856 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.822622 4856 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.822627 4856 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.822634 4856 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.822639 4856 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:24.822643 4856 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:24.822651 4856 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:24.823117 4856 server.go:940] "Client rotation is on, will bootstrap in background" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:24.827821 4856 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:24.827931 4856 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:24.930415 4856 server.go:997] "Starting client certificate rotation" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:24.930462 4856 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:24.930888 4856 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-27 07:32:48.777159435 +0000 UTC Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:24.931582 4856 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.020841 4856 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 26 16:58:25 crc kubenswrapper[4856]: E0126 16:58:25.023224 4856 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.241:6443: connect: connection refused" logger="UnhandledError" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.023703 4856 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.036493 4856 log.go:25] "Validated CRI v1 runtime API" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.120809 4856 log.go:25] "Validated CRI v1 image API" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.123088 4856 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.126765 4856 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-26-16-53-11-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.126825 4856 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.153155 4856 manager.go:217] Machine: {Timestamp:2026-01-26 16:58:25.150906702 +0000 UTC m=+1.104160733 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:ca45d056-99cb-4442-8a44-7e899628ecb2 BootID:17523591-a778-4a97-aeab-8a7a93101850 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 
Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:7d:03:d5 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:7d:03:d5 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:71:88:05 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:bf:cf:c1 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:48:35:bb Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:b1:75:8b Speed:-1 Mtu:1496} {Name:eth10 MacAddress:8e:01:86:e1:48:8c Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:ae:c9:5b:3c:86:c8 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 
Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.153804 4856 manager_no_libpfm.go:29] cAdvisor is build without cgo 
and/or libpfm support. Perf event counters are not available. Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.154009 4856 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.154678 4856 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.154985 4856 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.155055 4856 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":nul
l,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.155410 4856 topology_manager.go:138] "Creating topology manager with none policy" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.155432 4856 container_manager_linux.go:303] "Creating device plugin manager" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.155793 4856 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.155844 4856 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.156265 4856 state_mem.go:36] "Initialized new in-memory state store" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.156431 4856 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.162846 4856 kubelet.go:418] "Attempting to sync node with API server" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.162897 4856 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.162981 4856 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.162999 4856 kubelet.go:324] "Adding apiserver pod source" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.163025 4856 apiserver.go:42] "Waiting for node sync before watching apiserver pods" 
Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:25.271211 4856 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.241:6443: connect: connection refused Jan 26 16:58:25 crc kubenswrapper[4856]: E0126 16:58:25.271434 4856 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.241:6443: connect: connection refused" logger="UnhandledError" Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:25.272043 4856 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.241:6443: connect: connection refused Jan 26 16:58:25 crc kubenswrapper[4856]: E0126 16:58:25.272169 4856 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.241:6443: connect: connection refused" logger="UnhandledError" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.273728 4856 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.274108 4856 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.276757 4856 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.297391 4856 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.297434 4856 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.297444 4856 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.297454 4856 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.297470 4856 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.297481 4856 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.297492 4856 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.297506 4856 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.297518 4856 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.297555 4856 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.297572 4856 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.297583 4856 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.297836 4856 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.298554 4856 server.go:1280] "Started kubelet" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.298756 4856 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.299088 4856 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.241:6443: connect: connection refused Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.299053 4856 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 26 16:58:25 crc systemd[1]: Started Kubernetes Kubelet. Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.300824 4856 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.300834 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.300907 4856 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.301001 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 04:21:28.157087387 +0000 UTC Jan 26 16:58:25 crc kubenswrapper[4856]: E0126 16:58:25.301289 4856 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.304084 4856 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.304112 4856 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 
16:58:25.304241 4856 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 26 16:58:25 crc kubenswrapper[4856]: E0126 16:58:25.301988 4856 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.241:6443: connect: connection refused" interval="200ms" Jan 26 16:58:25 crc kubenswrapper[4856]: E0126 16:58:25.304147 4856 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.241:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e56674e4cefc8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 16:58:25.298468808 +0000 UTC m=+1.251722819,LastTimestamp:2026-01-26 16:58:25.298468808 +0000 UTC m=+1.251722819,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:25.304984 4856 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.241:6443: connect: connection refused Jan 26 16:58:25 crc kubenswrapper[4856]: E0126 16:58:25.305054 4856 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.241:6443: connect: connection refused" 
logger="UnhandledError" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.305431 4856 server.go:460] "Adding debug handlers to kubelet server" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.306930 4856 factory.go:55] Registering systemd factory Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.306972 4856 factory.go:221] Registration of the systemd container factory successfully Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.307478 4856 factory.go:153] Registering CRI-O factory Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.307625 4856 factory.go:221] Registration of the crio container factory successfully Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.307811 4856 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.307994 4856 factory.go:103] Registering Raw factory Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.308130 4856 manager.go:1196] Started watching for new ooms in manager Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.309643 4856 manager.go:319] Starting recovery of all containers Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.319119 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.319252 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 
16:58:25.319274 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.319305 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.319324 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.319345 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.319412 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.319451 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.319547 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.319573 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.319600 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.320107 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.320160 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.320193 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.320214 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.320228 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.320243 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.320263 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.320277 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.320294 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.320319 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" 
volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.320338 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.320361 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.320383 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.320402 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.320420 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.320449 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" 
seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.320477 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.320504 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.320539 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.320559 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.320575 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.320600 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.320633 4856 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.320671 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.320691 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.320706 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.320721 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.320740 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.320754 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.320771 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.320796 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.320856 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.320875 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.320892 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.320908 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.320925 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.320946 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.320964 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.320978 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.320992 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.321015 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" 
seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.321073 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.321099 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.321136 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.321170 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.321188 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.321290 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.321304 4856 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.321320 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.321346 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.321361 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.321385 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.321408 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.321422 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.321446 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.321466 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.321491 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.321505 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.321578 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.321598 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.323074 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.323141 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.323160 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.323175 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.323190 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.323204 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.323219 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.323244 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.323257 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.323271 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.323285 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.323301 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" 
volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.323319 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.323334 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.323347 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.323366 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.323379 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.323392 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" 
seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.323405 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.323422 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.323435 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.323450 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.323465 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.323478 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.323492 
4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.323510 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.323543 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.323559 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.323573 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.323586 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.323600 4856 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.323619 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.323649 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.323704 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.323746 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.323775 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.323790 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.323804 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.323817 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.323829 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.323843 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.323871 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.323886 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.323899 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.323933 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.323947 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.323962 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.323992 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.324007 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" 
seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.324023 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.324036 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.324049 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.324065 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.324079 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.324091 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.324107 
4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.324120 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.324133 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.324146 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.324159 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.324181 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.324193 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.324205 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.324220 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.324235 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.324249 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.324263 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.324281 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.324294 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.324313 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.324325 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.324338 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.324351 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.324364 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.324388 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.324469 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.325308 4856 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.325340 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.325355 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.325370 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.325386 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.325401 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.325415 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.325429 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.325445 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.325459 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.325473 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.325487 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.325503 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.325516 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.325555 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.325570 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.325585 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.325599 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.325616 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.325631 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.325645 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.325659 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.325673 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.325687 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.325703 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.325719 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.325733 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.325751 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" 
seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.325773 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.325791 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.325807 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.325821 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.325836 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.325851 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 26 16:58:25 crc 
kubenswrapper[4856]: I0126 16:58:25.325864 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.325878 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.325892 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.325906 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.325921 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.325935 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.325948 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.325961 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.325976 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.325992 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.326013 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.326049 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.326067 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.326087 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.326098 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.326113 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.326126 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.326141 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.326152 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.326165 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.326178 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.326191 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.326231 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.326244 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.326256 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.326270 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.326297 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.326314 4856 reconstruct.go:97] "Volume reconstruction finished" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.326324 4856 reconciler.go:26] "Reconciler: start to sync state" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.337045 4856 manager.go:324] Recovery completed Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.347277 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.349074 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.349120 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.349136 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.349975 4856 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.350071 4856 
cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.350136 4856 state_mem.go:36] "Initialized new in-memory state store" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.377521 4856 policy_none.go:49] "None policy: Start" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.379004 4856 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.379135 4856 state_mem.go:35] "Initializing new in-memory state store" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.391549 4856 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.393770 4856 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.393865 4856 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.393918 4856 kubelet.go:2335] "Starting kubelet main sync loop" Jan 26 16:58:25 crc kubenswrapper[4856]: E0126 16:58:25.393983 4856 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 26 16:58:25 crc kubenswrapper[4856]: W0126 16:58:25.395619 4856 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.241:6443: connect: connection refused Jan 26 16:58:25 crc kubenswrapper[4856]: E0126 16:58:25.395808 4856 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.241:6443: 
connect: connection refused" logger="UnhandledError" Jan 26 16:58:25 crc kubenswrapper[4856]: E0126 16:58:25.402375 4856 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.436095 4856 manager.go:334] "Starting Device Plugin manager" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.436169 4856 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.436215 4856 server.go:79] "Starting device plugin registration server" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.436720 4856 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.436790 4856 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.437022 4856 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.437157 4856 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.437171 4856 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 26 16:58:25 crc kubenswrapper[4856]: E0126 16:58:25.446308 4856 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.494641 4856 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Jan 26 16:58:25 crc 
kubenswrapper[4856]: I0126 16:58:25.494860 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.496384 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.496446 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.496460 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.496793 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.497276 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.497365 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.498953 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.499008 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.499019 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.499231 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.500576 4856 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.500646 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.500658 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:25 crc kubenswrapper[4856]: E0126 16:58:25.505551 4856 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.241:6443: connect: connection refused" interval="400ms" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.537590 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.605472 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.605645 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.605647 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.605728 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.605799 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.607261 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.607299 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.607308 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.608324 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.608449 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.609027 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.609196 4856 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.610113 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.610144 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.610187 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.610410 
4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.610469 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.610498 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:25 crc kubenswrapper[4856]: E0126 16:58:25.610833 4856 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.241:6443: connect: connection refused" node="crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.611686 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.611758 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.611778 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.612077 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.612486 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.612813 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.615299 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.615322 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.615332 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.615329 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.615371 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.615385 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.615680 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.615714 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.616471 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.616539 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.616561 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.630408 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.630471 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.630510 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.630601 4856 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.630677 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.630813 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.630885 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.630911 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: 
I0126 16:58:25.630933 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.732046 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.732111 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.732137 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.732170 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.732195 4856 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.732219 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.732242 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.732239 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.732263 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.732263 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.732308 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.732341 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.732367 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.732384 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.732402 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod 
\"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.732434 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.732449 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.732477 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.732512 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.732549 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.732574 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.732621 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.732641 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.732675 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.811228 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.812853 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.812906 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.812920 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.812949 4856 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 16:58:25 crc kubenswrapper[4856]: E0126 16:58:25.813397 4856 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.241:6443: connect: connection refused" node="crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.833742 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.833777 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.833875 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.833892 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 
16:58:25.833906 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.833913 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.833941 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.833948 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.833957 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.833998 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 16:58:25 crc 
kubenswrapper[4856]: I0126 16:58:25.833995 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.834061 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: E0126 16:58:25.906204 4856 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.241:6443: connect: connection refused" interval="800ms" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.945317 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.971714 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.979565 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 16:58:25 crc kubenswrapper[4856]: I0126 16:58:25.996300 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 16:58:26 crc kubenswrapper[4856]: I0126 16:58:26.002815 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 26 16:58:26 crc kubenswrapper[4856]: W0126 16:58:26.041774 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-f51fb69a746ad12932aa90f0ee7136d67ee5e0191155b290431c9c04aaa872ac WatchSource:0}: Error finding container f51fb69a746ad12932aa90f0ee7136d67ee5e0191155b290431c9c04aaa872ac: Status 404 returned error can't find the container with id f51fb69a746ad12932aa90f0ee7136d67ee5e0191155b290431c9c04aaa872ac Jan 26 16:58:26 crc kubenswrapper[4856]: W0126 16:58:26.043401 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-9cc6e10c1f090b8c0577139a3d454c19f3e8ff4c4ba801b9b1e28208de61af3a WatchSource:0}: Error finding container 9cc6e10c1f090b8c0577139a3d454c19f3e8ff4c4ba801b9b1e28208de61af3a: Status 404 returned error can't find the container with id 9cc6e10c1f090b8c0577139a3d454c19f3e8ff4c4ba801b9b1e28208de61af3a Jan 26 16:58:26 crc kubenswrapper[4856]: W0126 16:58:26.044330 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-f8f29418ea7371f7c2156a6519615d2a7528e035664f5c3fb75908d5c0e95cba WatchSource:0}: Error finding container f8f29418ea7371f7c2156a6519615d2a7528e035664f5c3fb75908d5c0e95cba: Status 404 returned error can't find the container with id f8f29418ea7371f7c2156a6519615d2a7528e035664f5c3fb75908d5c0e95cba Jan 26 16:58:26 crc kubenswrapper[4856]: W0126 16:58:26.046924 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-4365b662a1ef1f25c43b0d9068b29f4b8c92282da9679a062ca15b8955aa46e5 WatchSource:0}: Error finding 
container 4365b662a1ef1f25c43b0d9068b29f4b8c92282da9679a062ca15b8955aa46e5: Status 404 returned error can't find the container with id 4365b662a1ef1f25c43b0d9068b29f4b8c92282da9679a062ca15b8955aa46e5 Jan 26 16:58:26 crc kubenswrapper[4856]: W0126 16:58:26.048934 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-c1e8e1a2f46385dd8f9419df37b7fc8324611fd06ffbe5e94e52acce3738296e WatchSource:0}: Error finding container c1e8e1a2f46385dd8f9419df37b7fc8324611fd06ffbe5e94e52acce3738296e: Status 404 returned error can't find the container with id c1e8e1a2f46385dd8f9419df37b7fc8324611fd06ffbe5e94e52acce3738296e Jan 26 16:58:26 crc kubenswrapper[4856]: I0126 16:58:26.213543 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:26 crc kubenswrapper[4856]: I0126 16:58:26.214424 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:26 crc kubenswrapper[4856]: I0126 16:58:26.214468 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:26 crc kubenswrapper[4856]: I0126 16:58:26.214482 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:26 crc kubenswrapper[4856]: I0126 16:58:26.214506 4856 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 16:58:26 crc kubenswrapper[4856]: E0126 16:58:26.215066 4856 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.241:6443: connect: connection refused" node="crc" Jan 26 16:58:26 crc kubenswrapper[4856]: I0126 16:58:26.300517 4856 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.241:6443: connect: connection refused Jan 26 16:58:26 crc kubenswrapper[4856]: I0126 16:58:26.301394 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 04:03:19.37964943 +0000 UTC Jan 26 16:58:26 crc kubenswrapper[4856]: W0126 16:58:26.327582 4856 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.241:6443: connect: connection refused Jan 26 16:58:26 crc kubenswrapper[4856]: E0126 16:58:26.327703 4856 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.241:6443: connect: connection refused" logger="UnhandledError" Jan 26 16:58:26 crc kubenswrapper[4856]: I0126 16:58:26.401558 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f8f29418ea7371f7c2156a6519615d2a7528e035664f5c3fb75908d5c0e95cba"} Jan 26 16:58:26 crc kubenswrapper[4856]: I0126 16:58:26.402376 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c1e8e1a2f46385dd8f9419df37b7fc8324611fd06ffbe5e94e52acce3738296e"} Jan 26 16:58:26 crc kubenswrapper[4856]: I0126 16:58:26.403125 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4365b662a1ef1f25c43b0d9068b29f4b8c92282da9679a062ca15b8955aa46e5"} Jan 26 16:58:26 crc kubenswrapper[4856]: I0126 16:58:26.404085 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f51fb69a746ad12932aa90f0ee7136d67ee5e0191155b290431c9c04aaa872ac"} Jan 26 16:58:26 crc kubenswrapper[4856]: I0126 16:58:26.405305 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"9cc6e10c1f090b8c0577139a3d454c19f3e8ff4c4ba801b9b1e28208de61af3a"} Jan 26 16:58:26 crc kubenswrapper[4856]: W0126 16:58:26.519911 4856 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.241:6443: connect: connection refused Jan 26 16:58:26 crc kubenswrapper[4856]: E0126 16:58:26.520028 4856 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.241:6443: connect: connection refused" logger="UnhandledError" Jan 26 16:58:26 crc kubenswrapper[4856]: W0126 16:58:26.580410 4856 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.241:6443: connect: connection refused Jan 26 16:58:26 crc kubenswrapper[4856]: E0126 16:58:26.580748 4856 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.241:6443: connect: connection refused" logger="UnhandledError" Jan 26 16:58:26 crc kubenswrapper[4856]: E0126 16:58:26.707093 4856 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.241:6443: connect: connection refused" interval="1.6s" Jan 26 16:58:26 crc kubenswrapper[4856]: W0126 16:58:26.782399 4856 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.241:6443: connect: connection refused Jan 26 16:58:26 crc kubenswrapper[4856]: E0126 16:58:26.782518 4856 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.241:6443: connect: connection refused" logger="UnhandledError" Jan 26 16:58:26 crc kubenswrapper[4856]: E0126 16:58:26.789871 4856 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.241:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e56674e4cefc8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 16:58:25.298468808 +0000 UTC 
m=+1.251722819,LastTimestamp:2026-01-26 16:58:25.298468808 +0000 UTC m=+1.251722819,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 16:58:27 crc kubenswrapper[4856]: I0126 16:58:27.016411 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:27 crc kubenswrapper[4856]: I0126 16:58:27.018507 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:27 crc kubenswrapper[4856]: I0126 16:58:27.018580 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:27 crc kubenswrapper[4856]: I0126 16:58:27.018603 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:27 crc kubenswrapper[4856]: I0126 16:58:27.018650 4856 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 16:58:27 crc kubenswrapper[4856]: E0126 16:58:27.019360 4856 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.241:6443: connect: connection refused" node="crc" Jan 26 16:58:27 crc kubenswrapper[4856]: I0126 16:58:27.104657 4856 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 26 16:58:27 crc kubenswrapper[4856]: E0126 16:58:27.105785 4856 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.241:6443: connect: connection refused" logger="UnhandledError" Jan 26 16:58:27 crc kubenswrapper[4856]: I0126 16:58:27.300578 4856 
csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.241:6443: connect: connection refused Jan 26 16:58:27 crc kubenswrapper[4856]: I0126 16:58:27.302507 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 08:14:22.30029563 +0000 UTC Jan 26 16:58:28 crc kubenswrapper[4856]: W0126 16:58:28.213278 4856 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.241:6443: connect: connection refused Jan 26 16:58:28 crc kubenswrapper[4856]: E0126 16:58:28.213655 4856 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.241:6443: connect: connection refused" logger="UnhandledError" Jan 26 16:58:28 crc kubenswrapper[4856]: I0126 16:58:28.300475 4856 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.241:6443: connect: connection refused Jan 26 16:58:28 crc kubenswrapper[4856]: I0126 16:58:28.303586 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 05:54:41.690373672 +0000 UTC Jan 26 16:58:28 crc kubenswrapper[4856]: E0126 16:58:28.308336 4856 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.241:6443: connect: connection refused" interval="3.2s" Jan 26 16:58:28 crc kubenswrapper[4856]: I0126 16:58:28.470948 4856 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="f67797295a8c3952902b2696c6fdb26b72ce1826b5ccd522a24aac90a0411b5f" exitCode=0 Jan 26 16:58:28 crc kubenswrapper[4856]: I0126 16:58:28.471041 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:28 crc kubenswrapper[4856]: I0126 16:58:28.471053 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"f67797295a8c3952902b2696c6fdb26b72ce1826b5ccd522a24aac90a0411b5f"} Jan 26 16:58:28 crc kubenswrapper[4856]: I0126 16:58:28.472704 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:28 crc kubenswrapper[4856]: I0126 16:58:28.472855 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:28 crc kubenswrapper[4856]: I0126 16:58:28.472870 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:28 crc kubenswrapper[4856]: I0126 16:58:28.475296 4856 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="bf39fbfd0b23f9b34e42610ae3603d849bcf4211f53ba47cbbebdaf47a9687d8" exitCode=0 Jan 26 16:58:28 crc kubenswrapper[4856]: I0126 16:58:28.475346 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"bf39fbfd0b23f9b34e42610ae3603d849bcf4211f53ba47cbbebdaf47a9687d8"} Jan 
26 16:58:28 crc kubenswrapper[4856]: I0126 16:58:28.475516 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:28 crc kubenswrapper[4856]: I0126 16:58:28.476833 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:28 crc kubenswrapper[4856]: I0126 16:58:28.476867 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:28 crc kubenswrapper[4856]: I0126 16:58:28.476878 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:28 crc kubenswrapper[4856]: I0126 16:58:28.478107 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7a00494ca589263eb0f50c879c0aa1e1c263f74e302325f88eee31b220ebf53b"} Jan 26 16:58:28 crc kubenswrapper[4856]: I0126 16:58:28.478141 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fb3c5348b8b83991cbb42255dc07d74fe50e200793efe1a7b2b2727a5c2be800"} Jan 26 16:58:28 crc kubenswrapper[4856]: I0126 16:58:28.481152 4856 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2ffc4b80383322e2c628a33ac37f15fd77c7650ca108c022f97bb48aad023462" exitCode=0 Jan 26 16:58:28 crc kubenswrapper[4856]: I0126 16:58:28.481214 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"2ffc4b80383322e2c628a33ac37f15fd77c7650ca108c022f97bb48aad023462"} Jan 26 16:58:28 crc kubenswrapper[4856]: I0126 16:58:28.481286 4856 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:28 crc kubenswrapper[4856]: I0126 16:58:28.482238 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:28 crc kubenswrapper[4856]: I0126 16:58:28.482275 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:28 crc kubenswrapper[4856]: I0126 16:58:28.482284 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:28 crc kubenswrapper[4856]: I0126 16:58:28.484120 4856 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="cb95f623fa4f5f42217649ccce4225f9fe588bb3558da9334eef2b40ddb62486" exitCode=0 Jan 26 16:58:28 crc kubenswrapper[4856]: I0126 16:58:28.484161 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"cb95f623fa4f5f42217649ccce4225f9fe588bb3558da9334eef2b40ddb62486"} Jan 26 16:58:28 crc kubenswrapper[4856]: I0126 16:58:28.484217 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:28 crc kubenswrapper[4856]: I0126 16:58:28.484969 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:28 crc kubenswrapper[4856]: I0126 16:58:28.484995 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:28 crc kubenswrapper[4856]: I0126 16:58:28.485004 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:28 crc kubenswrapper[4856]: W0126 16:58:28.504485 4856 reflector.go:561] k8s.io/client-go/informers/factory.go:160: 
failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.241:6443: connect: connection refused Jan 26 16:58:28 crc kubenswrapper[4856]: E0126 16:58:28.504606 4856 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.241:6443: connect: connection refused" logger="UnhandledError" Jan 26 16:58:28 crc kubenswrapper[4856]: I0126 16:58:28.560800 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:28 crc kubenswrapper[4856]: I0126 16:58:28.561876 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:28 crc kubenswrapper[4856]: I0126 16:58:28.561904 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:28 crc kubenswrapper[4856]: I0126 16:58:28.561912 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:28 crc kubenswrapper[4856]: I0126 16:58:28.619900 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:28 crc kubenswrapper[4856]: I0126 16:58:28.621879 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:28 crc kubenswrapper[4856]: I0126 16:58:28.622103 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:28 crc kubenswrapper[4856]: I0126 16:58:28.622117 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:28 crc 
kubenswrapper[4856]: I0126 16:58:28.622153 4856 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 16:58:28 crc kubenswrapper[4856]: E0126 16:58:28.622904 4856 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.241:6443: connect: connection refused" node="crc" Jan 26 16:58:28 crc kubenswrapper[4856]: W0126 16:58:28.833625 4856 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.241:6443: connect: connection refused Jan 26 16:58:28 crc kubenswrapper[4856]: E0126 16:58:28.833697 4856 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.241:6443: connect: connection refused" logger="UnhandledError" Jan 26 16:58:29 crc kubenswrapper[4856]: I0126 16:58:29.300283 4856 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.241:6443: connect: connection refused Jan 26 16:58:29 crc kubenswrapper[4856]: I0126 16:58:29.345936 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 10:57:12.498075796 +0000 UTC Jan 26 16:58:29 crc kubenswrapper[4856]: W0126 16:58:29.483145 4856 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.241:6443: connect: 
connection refused Jan 26 16:58:29 crc kubenswrapper[4856]: E0126 16:58:29.483295 4856 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.241:6443: connect: connection refused" logger="UnhandledError" Jan 26 16:58:29 crc kubenswrapper[4856]: I0126 16:58:29.489596 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"8cbde934a6c8acad10ca3ab8206d0ddbd4f7b17e9d304b898a68f4d3b0303bb4"} Jan 26 16:58:29 crc kubenswrapper[4856]: I0126 16:58:29.491853 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"ec9063a7c03990fc26fc47427f164a769fd649c2bdbd9d23ea7f646e569734be"} Jan 26 16:58:29 crc kubenswrapper[4856]: I0126 16:58:29.491928 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:29 crc kubenswrapper[4856]: I0126 16:58:29.493255 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:29 crc kubenswrapper[4856]: I0126 16:58:29.493316 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:29 crc kubenswrapper[4856]: I0126 16:58:29.493335 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:29 crc kubenswrapper[4856]: I0126 16:58:29.494209 4856 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a2ca7ee60b82663fdc02dc2dd3f7af379df8407800d04c57f4f4d09d49ed9aa0" 
exitCode=0 Jan 26 16:58:29 crc kubenswrapper[4856]: I0126 16:58:29.494305 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a2ca7ee60b82663fdc02dc2dd3f7af379df8407800d04c57f4f4d09d49ed9aa0"} Jan 26 16:58:29 crc kubenswrapper[4856]: I0126 16:58:29.494467 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:29 crc kubenswrapper[4856]: I0126 16:58:29.495690 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:29 crc kubenswrapper[4856]: I0126 16:58:29.495736 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:29 crc kubenswrapper[4856]: I0126 16:58:29.495747 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:29 crc kubenswrapper[4856]: I0126 16:58:29.499033 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0a03e2fad94ce4122f1d77ce30dc80bb78298396649c12b885c386e5f8eea50b"} Jan 26 16:58:29 crc kubenswrapper[4856]: I0126 16:58:29.499086 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7c3027fabe8a104141386b9767218f38a143318580dd2a33448fed2c05688ba1"} Jan 26 16:58:29 crc kubenswrapper[4856]: I0126 16:58:29.499131 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:29 crc kubenswrapper[4856]: I0126 16:58:29.500570 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 
16:58:29 crc kubenswrapper[4856]: I0126 16:58:29.500609 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:29 crc kubenswrapper[4856]: I0126 16:58:29.500624 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:29 crc kubenswrapper[4856]: I0126 16:58:29.501341 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"89975b8f9428f81ab5d3fb48ced5dd9c837bea2feea3b89f5f7ff8d7d5d15b3e"} Jan 26 16:58:30 crc kubenswrapper[4856]: I0126 16:58:30.300005 4856 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.241:6443: connect: connection refused Jan 26 16:58:30 crc kubenswrapper[4856]: I0126 16:58:30.347036 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 05:01:20.71257534 +0000 UTC Jan 26 16:58:30 crc kubenswrapper[4856]: I0126 16:58:30.562437 4856 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ee2a878cbd2cdef8fe8d9bb62a4554ffb8aeadfb90ab92b4ff6ec965824ec37a" exitCode=0 Jan 26 16:58:30 crc kubenswrapper[4856]: I0126 16:58:30.562541 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ee2a878cbd2cdef8fe8d9bb62a4554ffb8aeadfb90ab92b4ff6ec965824ec37a"} Jan 26 16:58:30 crc kubenswrapper[4856]: I0126 16:58:30.562713 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:30 crc kubenswrapper[4856]: I0126 16:58:30.563833 4856 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:30 crc kubenswrapper[4856]: I0126 16:58:30.563871 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:30 crc kubenswrapper[4856]: I0126 16:58:30.563883 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:30 crc kubenswrapper[4856]: I0126 16:58:30.569646 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f2daa3f13d37b7ae19dfca406b6b5cfdcd4f211287f7d410193f2fee36a24553"} Jan 26 16:58:30 crc kubenswrapper[4856]: I0126 16:58:30.569734 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"40d432159a07ba20bf95f058bf8a597d67cbd8d852519bed035694f8ba3d8ec4"} Jan 26 16:58:30 crc kubenswrapper[4856]: I0126 16:58:30.569768 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"59ba0f131eee14df389391e46bc772894e49e976c3663df5fb3bc98b3cb4a3d6"} Jan 26 16:58:30 crc kubenswrapper[4856]: I0126 16:58:30.574918 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:30 crc kubenswrapper[4856]: I0126 16:58:30.575058 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c0094e662f53c4832a984e05a880021af05ffc4c27f25394c28a070d9ef5490d"} Jan 26 16:58:30 crc kubenswrapper[4856]: I0126 16:58:30.575131 4856 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b48f763d4aff37169399be766d5ab4f7ebbf91f304d139c9022a8556946eb107"} Jan 26 16:58:30 crc kubenswrapper[4856]: I0126 16:58:30.575223 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:30 crc kubenswrapper[4856]: I0126 16:58:30.575264 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:30 crc kubenswrapper[4856]: I0126 16:58:30.577313 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:30 crc kubenswrapper[4856]: I0126 16:58:30.577381 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:30 crc kubenswrapper[4856]: I0126 16:58:30.577400 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:30 crc kubenswrapper[4856]: I0126 16:58:30.578586 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:30 crc kubenswrapper[4856]: I0126 16:58:30.578646 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:30 crc kubenswrapper[4856]: I0126 16:58:30.578665 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:30 crc kubenswrapper[4856]: I0126 16:58:30.580291 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:30 crc kubenswrapper[4856]: I0126 16:58:30.580336 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:30 crc kubenswrapper[4856]: I0126 16:58:30.582141 4856 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:30 crc kubenswrapper[4856]: I0126 16:58:30.583619 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 16:58:31 crc kubenswrapper[4856]: I0126 16:58:31.300058 4856 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.241:6443: connect: connection refused Jan 26 16:58:31 crc kubenswrapper[4856]: I0126 16:58:31.348182 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 01:10:46.430794525 +0000 UTC Jan 26 16:58:31 crc kubenswrapper[4856]: I0126 16:58:31.434681 4856 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 26 16:58:31 crc kubenswrapper[4856]: E0126 16:58:31.436776 4856 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.241:6443: connect: connection refused" logger="UnhandledError" Jan 26 16:58:31 crc kubenswrapper[4856]: E0126 16:58:31.568006 4856 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.241:6443: connect: connection refused" interval="6.4s" Jan 26 16:58:31 crc kubenswrapper[4856]: I0126 16:58:31.582837 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c387234ad8d7123da333d3de4a80f3a79c25dddf0c3a0fb004b521161ff105b4"} Jan 26 16:58:31 crc kubenswrapper[4856]: I0126 16:58:31.582936 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"93c814433ba35046d47c29524f19b728793436e9f6967a6ea7249e35f673f48a"} Jan 26 16:58:31 crc kubenswrapper[4856]: I0126 16:58:31.586861 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9fc5bd8ccf4d2f104d1ef654e18a5851e0cd141cea5247a692b0bdf92c390b4f"} Jan 26 16:58:31 crc kubenswrapper[4856]: I0126 16:58:31.587059 4856 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 16:58:31 crc kubenswrapper[4856]: I0126 16:58:31.587173 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:31 crc kubenswrapper[4856]: I0126 16:58:31.587174 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:31 crc kubenswrapper[4856]: I0126 16:58:31.592334 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:31 crc kubenswrapper[4856]: I0126 16:58:31.593256 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:31 crc kubenswrapper[4856]: I0126 16:58:31.593221 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:31 crc kubenswrapper[4856]: I0126 16:58:31.593421 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:31 crc kubenswrapper[4856]: I0126 16:58:31.593447 4856 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:31 crc kubenswrapper[4856]: I0126 16:58:31.593629 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:31 crc kubenswrapper[4856]: I0126 16:58:31.593711 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:31 crc kubenswrapper[4856]: I0126 16:58:31.595051 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:31 crc kubenswrapper[4856]: I0126 16:58:31.595127 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:31 crc kubenswrapper[4856]: I0126 16:58:31.595146 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:31 crc kubenswrapper[4856]: I0126 16:58:31.823265 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:31 crc kubenswrapper[4856]: I0126 16:58:31.824789 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:31 crc kubenswrapper[4856]: I0126 16:58:31.824830 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:31 crc kubenswrapper[4856]: I0126 16:58:31.824841 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:31 crc kubenswrapper[4856]: I0126 16:58:31.824866 4856 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 16:58:31 crc kubenswrapper[4856]: E0126 16:58:31.825409 4856 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 
38.102.83.241:6443: connect: connection refused" node="crc" Jan 26 16:58:32 crc kubenswrapper[4856]: I0126 16:58:32.348600 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 16:17:30.985401115 +0000 UTC Jan 26 16:58:32 crc kubenswrapper[4856]: I0126 16:58:32.595126 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8c687b137e2bdfb70b19588ae8f5c65a23c2df57716cfd6918856236f2d6610a"} Jan 26 16:58:32 crc kubenswrapper[4856]: I0126 16:58:32.595178 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ff371891a210c6f3498b0d8377c477749a9ea438aa74f1e33f8ac9047df447ca"} Jan 26 16:58:32 crc kubenswrapper[4856]: I0126 16:58:32.595192 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d65b21fc101230cb18ee921fc481e83c944dde8fe01074931b90551e082ee249"} Jan 26 16:58:32 crc kubenswrapper[4856]: I0126 16:58:32.595229 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:32 crc kubenswrapper[4856]: I0126 16:58:32.596004 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:32 crc kubenswrapper[4856]: I0126 16:58:32.596030 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:32 crc kubenswrapper[4856]: I0126 16:58:32.596039 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:32 crc kubenswrapper[4856]: I0126 16:58:32.596482 4856 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 26 16:58:32 crc kubenswrapper[4856]: I0126 16:58:32.598024 4856 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9fc5bd8ccf4d2f104d1ef654e18a5851e0cd141cea5247a692b0bdf92c390b4f" exitCode=255 Jan 26 16:58:32 crc kubenswrapper[4856]: I0126 16:58:32.598071 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"9fc5bd8ccf4d2f104d1ef654e18a5851e0cd141cea5247a692b0bdf92c390b4f"} Jan 26 16:58:32 crc kubenswrapper[4856]: I0126 16:58:32.598154 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:32 crc kubenswrapper[4856]: I0126 16:58:32.598976 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:32 crc kubenswrapper[4856]: I0126 16:58:32.599013 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:32 crc kubenswrapper[4856]: I0126 16:58:32.599026 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:32 crc kubenswrapper[4856]: I0126 16:58:32.599680 4856 scope.go:117] "RemoveContainer" containerID="9fc5bd8ccf4d2f104d1ef654e18a5851e0cd141cea5247a692b0bdf92c390b4f" Jan 26 16:58:33 crc kubenswrapper[4856]: I0126 16:58:33.349252 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 23:31:18.676106816 +0000 UTC Jan 26 16:58:33 crc kubenswrapper[4856]: I0126 16:58:33.414288 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" 
Jan 26 16:58:33 crc kubenswrapper[4856]: I0126 16:58:33.507626 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 16:58:33 crc kubenswrapper[4856]: I0126 16:58:33.507806 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:33 crc kubenswrapper[4856]: I0126 16:58:33.508986 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:33 crc kubenswrapper[4856]: I0126 16:58:33.509035 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:33 crc kubenswrapper[4856]: I0126 16:58:33.509046 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:33 crc kubenswrapper[4856]: I0126 16:58:33.515148 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 16:58:33 crc kubenswrapper[4856]: I0126 16:58:33.602982 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 26 16:58:33 crc kubenswrapper[4856]: I0126 16:58:33.604581 4856 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 16:58:33 crc kubenswrapper[4856]: I0126 16:58:33.604632 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:33 crc kubenswrapper[4856]: I0126 16:58:33.604640 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:33 crc kubenswrapper[4856]: I0126 16:58:33.604630 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:33 crc 
kubenswrapper[4856]: I0126 16:58:33.605426 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3f07438e20bdf71c752bb661084c835341999c561bb5442d75e177223881276f"} Jan 26 16:58:33 crc kubenswrapper[4856]: I0126 16:58:33.605868 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:33 crc kubenswrapper[4856]: I0126 16:58:33.605877 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:33 crc kubenswrapper[4856]: I0126 16:58:33.605896 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:33 crc kubenswrapper[4856]: I0126 16:58:33.605906 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:33 crc kubenswrapper[4856]: I0126 16:58:33.605913 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:33 crc kubenswrapper[4856]: I0126 16:58:33.605920 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:33 crc kubenswrapper[4856]: I0126 16:58:33.605960 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:33 crc kubenswrapper[4856]: I0126 16:58:33.605978 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:33 crc kubenswrapper[4856]: I0126 16:58:33.605937 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:33 crc kubenswrapper[4856]: I0126 16:58:33.714256 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 16:58:33 crc kubenswrapper[4856]: I0126 16:58:33.714462 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:33 crc kubenswrapper[4856]: I0126 16:58:33.720443 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:33 crc kubenswrapper[4856]: I0126 16:58:33.721012 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:33 crc kubenswrapper[4856]: I0126 16:58:33.721058 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:33 crc kubenswrapper[4856]: I0126 16:58:33.789229 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:58:34 crc kubenswrapper[4856]: I0126 16:58:34.350217 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 19:29:03.833767763 +0000 UTC Jan 26 16:58:34 crc kubenswrapper[4856]: I0126 16:58:34.608001 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:34 crc kubenswrapper[4856]: I0126 16:58:34.609332 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:34 crc kubenswrapper[4856]: I0126 16:58:34.609415 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:34 crc kubenswrapper[4856]: I0126 16:58:34.609438 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:35 crc kubenswrapper[4856]: I0126 16:58:35.201616 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:58:35 crc kubenswrapper[4856]: I0126 16:58:35.350953 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 22:13:43.255123026 +0000 UTC Jan 26 16:58:35 crc kubenswrapper[4856]: E0126 16:58:35.446680 4856 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 16:58:35 crc kubenswrapper[4856]: I0126 16:58:35.611143 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:35 crc kubenswrapper[4856]: I0126 16:58:35.612283 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:35 crc kubenswrapper[4856]: I0126 16:58:35.612324 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:35 crc kubenswrapper[4856]: I0126 16:58:35.612336 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:36 crc kubenswrapper[4856]: I0126 16:58:36.049234 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:58:36 crc kubenswrapper[4856]: I0126 16:58:36.351502 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 04:43:27.483740302 +0000 UTC Jan 26 16:58:36 crc kubenswrapper[4856]: I0126 16:58:36.614218 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:36 crc kubenswrapper[4856]: I0126 16:58:36.615453 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:36 crc kubenswrapper[4856]: I0126 16:58:36.615506 
4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:36 crc kubenswrapper[4856]: I0126 16:58:36.615545 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:37 crc kubenswrapper[4856]: I0126 16:58:37.352310 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 01:24:49.04625942 +0000 UTC Jan 26 16:58:37 crc kubenswrapper[4856]: I0126 16:58:37.405489 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 26 16:58:37 crc kubenswrapper[4856]: I0126 16:58:37.405824 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:37 crc kubenswrapper[4856]: I0126 16:58:37.407925 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:37 crc kubenswrapper[4856]: I0126 16:58:37.407983 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:37 crc kubenswrapper[4856]: I0126 16:58:37.407999 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:37 crc kubenswrapper[4856]: I0126 16:58:37.617701 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:37 crc kubenswrapper[4856]: I0126 16:58:37.619119 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:37 crc kubenswrapper[4856]: I0126 16:58:37.619173 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:37 crc kubenswrapper[4856]: I0126 16:58:37.619192 4856 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:38 crc kubenswrapper[4856]: I0126 16:58:38.182136 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 16:58:38 crc kubenswrapper[4856]: I0126 16:58:38.182374 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:38 crc kubenswrapper[4856]: I0126 16:58:38.183597 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:38 crc kubenswrapper[4856]: I0126 16:58:38.183635 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:38 crc kubenswrapper[4856]: I0126 16:58:38.183645 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:38 crc kubenswrapper[4856]: I0126 16:58:38.186965 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 16:58:38 crc kubenswrapper[4856]: I0126 16:58:38.226516 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:38 crc kubenswrapper[4856]: I0126 16:58:38.228680 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:38 crc kubenswrapper[4856]: I0126 16:58:38.228766 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:38 crc kubenswrapper[4856]: I0126 16:58:38.228792 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:38 crc kubenswrapper[4856]: I0126 16:58:38.228843 4856 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 16:58:38 crc kubenswrapper[4856]: I0126 
16:58:38.337723 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 26 16:58:38 crc kubenswrapper[4856]: I0126 16:58:38.338137 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:38 crc kubenswrapper[4856]: I0126 16:58:38.339671 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:38 crc kubenswrapper[4856]: I0126 16:58:38.339728 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:38 crc kubenswrapper[4856]: I0126 16:58:38.339739 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:38 crc kubenswrapper[4856]: I0126 16:58:38.352803 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 20:33:33.30325777 +0000 UTC Jan 26 16:58:38 crc kubenswrapper[4856]: I0126 16:58:38.421887 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 16:58:38 crc kubenswrapper[4856]: I0126 16:58:38.619420 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:38 crc kubenswrapper[4856]: I0126 16:58:38.620215 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:38 crc kubenswrapper[4856]: I0126 16:58:38.620254 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:38 crc kubenswrapper[4856]: I0126 16:58:38.620271 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:39 crc kubenswrapper[4856]: I0126 16:58:39.353926 
4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 00:08:10.679492684 +0000 UTC Jan 26 16:58:39 crc kubenswrapper[4856]: I0126 16:58:39.624885 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:39 crc kubenswrapper[4856]: I0126 16:58:39.626206 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:39 crc kubenswrapper[4856]: I0126 16:58:39.626250 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:39 crc kubenswrapper[4856]: I0126 16:58:39.626260 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:39 crc kubenswrapper[4856]: I0126 16:58:39.921001 4856 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 26 16:58:40 crc kubenswrapper[4856]: I0126 16:58:40.355082 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 01:07:11.400669633 +0000 UTC Jan 26 16:58:41 crc kubenswrapper[4856]: I0126 16:58:41.355804 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 08:21:18.503807651 +0000 UTC Jan 26 16:58:41 crc kubenswrapper[4856]: I0126 16:58:41.422966 4856 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 16:58:41 crc kubenswrapper[4856]: I0126 16:58:41.423111 
4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 16:58:42 crc kubenswrapper[4856]: I0126 16:58:42.300955 4856 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 26 16:58:42 crc kubenswrapper[4856]: I0126 16:58:42.356516 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 07:06:34.332020793 +0000 UTC Jan 26 16:58:43 crc kubenswrapper[4856]: W0126 16:58:43.143270 4856 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 26 16:58:43 crc kubenswrapper[4856]: I0126 16:58:43.143386 4856 trace.go:236] Trace[1140545813]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 16:58:33.140) (total time: 10002ms): Jan 26 16:58:43 crc kubenswrapper[4856]: Trace[1140545813]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (16:58:43.143) Jan 26 16:58:43 crc kubenswrapper[4856]: Trace[1140545813]: [10.002586682s] [10.002586682s] END Jan 26 16:58:43 crc kubenswrapper[4856]: E0126 16:58:43.143412 4856 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 26 16:58:43 crc kubenswrapper[4856]: I0126 16:58:43.357281 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 23:11:27.756708831 +0000 UTC Jan 26 16:58:43 crc kubenswrapper[4856]: I0126 16:58:43.415851 4856 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 26 16:58:43 crc kubenswrapper[4856]: I0126 16:58:43.415939 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 26 16:58:43 crc kubenswrapper[4856]: W0126 16:58:43.527223 4856 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 26 16:58:43 crc kubenswrapper[4856]: I0126 16:58:43.527388 4856 trace.go:236] Trace[11673521]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 16:58:33.526) (total time: 10001ms): Jan 26 16:58:43 crc kubenswrapper[4856]: Trace[11673521]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (16:58:43.527) Jan 26 16:58:43 crc kubenswrapper[4856]: Trace[11673521]: 
[10.001270456s] [10.001270456s] END Jan 26 16:58:43 crc kubenswrapper[4856]: E0126 16:58:43.527431 4856 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 26 16:58:44 crc kubenswrapper[4856]: W0126 16:58:44.009189 4856 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 26 16:58:44 crc kubenswrapper[4856]: I0126 16:58:44.009339 4856 trace.go:236] Trace[671209671]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 16:58:34.007) (total time: 10002ms): Jan 26 16:58:44 crc kubenswrapper[4856]: Trace[671209671]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (16:58:44.009) Jan 26 16:58:44 crc kubenswrapper[4856]: Trace[671209671]: [10.002222243s] [10.002222243s] END Jan 26 16:58:44 crc kubenswrapper[4856]: E0126 16:58:44.009374 4856 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 26 16:58:44 crc kubenswrapper[4856]: W0126 16:58:44.189567 4856 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 26 16:58:44 crc kubenswrapper[4856]: 
I0126 16:58:44.189683 4856 trace.go:236] Trace[185470342]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 16:58:34.187) (total time: 10001ms): Jan 26 16:58:44 crc kubenswrapper[4856]: Trace[185470342]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (16:58:44.189) Jan 26 16:58:44 crc kubenswrapper[4856]: Trace[185470342]: [10.001879713s] [10.001879713s] END Jan 26 16:58:44 crc kubenswrapper[4856]: E0126 16:58:44.189714 4856 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 26 16:58:44 crc kubenswrapper[4856]: I0126 16:58:44.357717 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 02:30:39.371218529 +0000 UTC Jan 26 16:58:45 crc kubenswrapper[4856]: I0126 16:58:45.201357 4856 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 16:58:45 crc kubenswrapper[4856]: I0126 16:58:45.201465 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 16:58:45 crc 
kubenswrapper[4856]: I0126 16:58:45.358636 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 14:00:31.258866887 +0000 UTC Jan 26 16:58:45 crc kubenswrapper[4856]: E0126 16:58:45.485091 4856 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 16:58:45 crc kubenswrapper[4856]: I0126 16:58:45.802415 4856 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 26 16:58:45 crc kubenswrapper[4856]: I0126 16:58:45.802544 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 26 16:58:46 crc kubenswrapper[4856]: I0126 16:58:46.359618 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 08:27:46.730595175 +0000 UTC Jan 26 16:58:47 crc kubenswrapper[4856]: I0126 16:58:47.361279 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 09:06:06.996746965 +0000 UTC Jan 26 16:58:48 crc kubenswrapper[4856]: I0126 16:58:48.361830 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 20:22:08.932049323 +0000 UTC Jan 26 16:58:48 crc kubenswrapper[4856]: I0126 16:58:48.371016 4856 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 26 16:58:48 crc kubenswrapper[4856]: I0126 16:58:48.371298 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:48 crc kubenswrapper[4856]: I0126 16:58:48.372822 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:48 crc kubenswrapper[4856]: I0126 16:58:48.372877 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:48 crc kubenswrapper[4856]: I0126 16:58:48.372895 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:48 crc kubenswrapper[4856]: I0126 16:58:48.382916 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 26 16:58:48 crc kubenswrapper[4856]: I0126 16:58:48.650825 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:48 crc kubenswrapper[4856]: I0126 16:58:48.651825 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:48 crc kubenswrapper[4856]: I0126 16:58:48.651869 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:48 crc kubenswrapper[4856]: I0126 16:58:48.651878 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:49 crc kubenswrapper[4856]: I0126 16:58:49.362670 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 08:26:10.529429801 +0000 UTC Jan 26 16:58:50 crc kubenswrapper[4856]: I0126 16:58:50.207292 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:58:50 crc kubenswrapper[4856]: I0126 16:58:50.207606 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:50 crc kubenswrapper[4856]: I0126 16:58:50.209358 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:50 crc kubenswrapper[4856]: I0126 16:58:50.209419 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:50 crc kubenswrapper[4856]: I0126 16:58:50.209433 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:50 crc kubenswrapper[4856]: I0126 16:58:50.212569 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:58:50 crc kubenswrapper[4856]: I0126 16:58:50.363129 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 02:32:28.264500945 +0000 UTC Jan 26 16:58:50 crc kubenswrapper[4856]: I0126 16:58:50.656925 4856 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 16:58:50 crc kubenswrapper[4856]: I0126 16:58:50.656985 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:50 crc kubenswrapper[4856]: I0126 16:58:50.658216 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:50 crc kubenswrapper[4856]: I0126 16:58:50.658246 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:50 crc kubenswrapper[4856]: I0126 16:58:50.658258 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 26 16:58:50 crc kubenswrapper[4856]: E0126 16:58:50.796496 4856 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="7s" Jan 26 16:58:50 crc kubenswrapper[4856]: E0126 16:58:50.800819 4856 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 26 16:58:50 crc kubenswrapper[4856]: I0126 16:58:50.801753 4856 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 26 16:58:50 crc kubenswrapper[4856]: I0126 16:58:50.847364 4856 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:40202->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 26 16:58:50 crc kubenswrapper[4856]: I0126 16:58:50.847432 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:40202->192.168.126.11:17697: read: connection reset by peer" Jan 26 16:58:50 crc kubenswrapper[4856]: I0126 16:58:50.847860 4856 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 26 16:58:50 crc kubenswrapper[4856]: I0126 16:58:50.847909 4856 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 26 16:58:51 crc kubenswrapper[4856]: I0126 16:58:51.179478 4856 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 26 16:58:51 crc kubenswrapper[4856]: I0126 16:58:51.229938 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 16:58:51 crc kubenswrapper[4856]: I0126 16:58:51.230217 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:51 crc kubenswrapper[4856]: I0126 16:58:51.232129 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:51 crc kubenswrapper[4856]: I0126 16:58:51.232184 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:51 crc kubenswrapper[4856]: I0126 16:58:51.232198 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:51 crc kubenswrapper[4856]: I0126 16:58:51.235399 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 16:58:51 crc kubenswrapper[4856]: I0126 16:58:51.363464 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 04:42:42.097215514 +0000 UTC Jan 26 16:58:51 crc kubenswrapper[4856]: I0126 16:58:51.662132 4856 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 26 16:58:51 crc kubenswrapper[4856]: I0126 16:58:51.662716 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 26 16:58:51 crc kubenswrapper[4856]: I0126 16:58:51.664855 4856 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3f07438e20bdf71c752bb661084c835341999c561bb5442d75e177223881276f" exitCode=255 Jan 26 16:58:51 crc kubenswrapper[4856]: I0126 16:58:51.664952 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3f07438e20bdf71c752bb661084c835341999c561bb5442d75e177223881276f"} Jan 26 16:58:51 crc kubenswrapper[4856]: I0126 16:58:51.664973 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:51 crc kubenswrapper[4856]: I0126 16:58:51.665070 4856 scope.go:117] "RemoveContainer" containerID="9fc5bd8ccf4d2f104d1ef654e18a5851e0cd141cea5247a692b0bdf92c390b4f" Jan 26 16:58:51 crc kubenswrapper[4856]: I0126 16:58:51.665571 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:51 crc kubenswrapper[4856]: I0126 16:58:51.665999 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:51 crc kubenswrapper[4856]: I0126 16:58:51.666058 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:51 crc kubenswrapper[4856]: I0126 16:58:51.666072 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:51 crc 
kubenswrapper[4856]: I0126 16:58:51.666889 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:51 crc kubenswrapper[4856]: I0126 16:58:51.666928 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:51 crc kubenswrapper[4856]: I0126 16:58:51.666942 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:51 crc kubenswrapper[4856]: I0126 16:58:51.667702 4856 scope.go:117] "RemoveContainer" containerID="3f07438e20bdf71c752bb661084c835341999c561bb5442d75e177223881276f" Jan 26 16:58:51 crc kubenswrapper[4856]: E0126 16:58:51.667930 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 26 16:58:52 crc kubenswrapper[4856]: I0126 16:58:52.363849 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 05:25:06.758196884 +0000 UTC Jan 26 16:58:52 crc kubenswrapper[4856]: I0126 16:58:52.660177 4856 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 26 16:58:52 crc kubenswrapper[4856]: I0126 16:58:52.670750 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 26 16:58:53 crc kubenswrapper[4856]: I0126 16:58:53.402081 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation 
deadline is 2025-11-08 05:10:39.45398496 +0000 UTC Jan 26 16:58:53 crc kubenswrapper[4856]: I0126 16:58:53.415125 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:58:53 crc kubenswrapper[4856]: I0126 16:58:53.415377 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:53 crc kubenswrapper[4856]: I0126 16:58:53.416609 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:53 crc kubenswrapper[4856]: I0126 16:58:53.416648 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:53 crc kubenswrapper[4856]: I0126 16:58:53.416662 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:53 crc kubenswrapper[4856]: I0126 16:58:53.417412 4856 scope.go:117] "RemoveContainer" containerID="3f07438e20bdf71c752bb661084c835341999c561bb5442d75e177223881276f" Jan 26 16:58:53 crc kubenswrapper[4856]: E0126 16:58:53.417658 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 26 16:58:53 crc kubenswrapper[4856]: I0126 16:58:53.518074 4856 csr.go:261] certificate signing request csr-v4mqc is approved, waiting to be issued Jan 26 16:58:53 crc kubenswrapper[4856]: I0126 16:58:53.526238 4856 csr.go:257] certificate signing request csr-v4mqc is issued Jan 26 16:58:53 crc kubenswrapper[4856]: I0126 16:58:53.561825 4856 reflector.go:368] Caches populated for *v1.Service from 
k8s.io/client-go/informers/factory.go:160 Jan 26 16:58:53 crc kubenswrapper[4856]: I0126 16:58:53.795607 4856 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.293913 4856 apiserver.go:52] "Watching apiserver" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.457562 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 03:27:12.045590418 +0000 UTC Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.506463 4856 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.507073 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-dns/node-resolver-t4fq2","openshift-image-registry/node-ca-tp5hk","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"] Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.508231 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-tp5hk" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.508832 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.509268 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:58:54 crc kubenswrapper[4856]: E0126 16:58:54.509327 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.509723 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:58:54 crc kubenswrapper[4856]: E0126 16:58:54.509776 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.509867 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.510347 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.510782 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:58:54 crc kubenswrapper[4856]: E0126 16:58:54.510834 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.510895 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-t4fq2" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.547636 4856 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-26 16:53:53 +0000 UTC, rotation deadline is 2026-11-12 20:49:32.130259927 +0000 UTC Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.547695 4856 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6963h50m37.582568216s for next certificate rotation Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.571868 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.572042 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.572215 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.572295 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 
16:58:54.572042 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.572542 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.572569 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.572617 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.572799 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.575746 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.575943 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.576171 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.576397 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.576614 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.581179 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 26 16:58:54 crc 
kubenswrapper[4856]: I0126 16:58:54.582890 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.590307 4856 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.600886 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tp5hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f28414c-12c1-4adb-be7b-6182310828eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzc59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tp5hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.605506 4856 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.616480 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.627299 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.637370 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.651704 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.658599 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.658648 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.658681 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.658708 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.658749 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.658766 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.658784 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.658798 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.658818 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.658833 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.658927 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.658946 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.658986 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.659006 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.659023 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.659045 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.659043 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.659099 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.659044 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.659159 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.659178 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.659213 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 16:58:54 crc 
kubenswrapper[4856]: I0126 16:58:54.659229 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.659240 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.659253 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.659296 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.659313 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.659329 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" 
(UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.659344 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.659376 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.659392 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.659407 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.659422 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: 
\"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.659455 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.659476 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.659491 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.659501 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.659507 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.659597 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.659640 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.659657 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.659677 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.659696 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.659691 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.659745 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.659762 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.659746 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.659781 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.659798 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.659816 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.659833 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.659875 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.659893 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.659910 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.659931 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.659946 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.659962 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.659977 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 
16:58:54.659992 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.660007 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.660022 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.660047 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.660063 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.660078 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: 
\"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.660095 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.660117 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.660133 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.660153 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.660170 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.660194 4856 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.660216 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.660254 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.660273 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.660288 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.660305 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: 
\"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.660325 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.660359 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.660385 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.660403 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.660420 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.660441 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.660463 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.660505 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.660550 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.660577 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.660598 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 
26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.660618 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.660636 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.660685 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.660720 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.660751 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.660792 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.660820 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.660860 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.660892 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.660918 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.660944 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 16:58:54 crc 
kubenswrapper[4856]: I0126 16:58:54.660979 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.661001 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.661030 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.661064 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.661090 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.661112 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.661156 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.661181 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.661204 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.661226 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.661248 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.661269 4856 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.661329 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.661352 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.661381 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.661408 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.661440 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: 
\"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.661464 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.661507 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.661554 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.661577 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.661601 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.661629 4856 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.661665 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.661688 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.661711 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.661799 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.661830 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" 
(UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.661861 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.661892 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.661915 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.661937 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.661964 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.661985 4856 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.662012 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.662038 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.662074 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.662098 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.662119 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: 
\"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.662157 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.662179 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.662201 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.662223 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.662255 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.662280 4856 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.662309 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.662346 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.662367 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.662389 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.662412 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.662475 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.662500 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.662995 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.663091 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.663142 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.663171 4856 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.663224 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.663251 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.663276 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.663301 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.663372 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: 
\"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.663403 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.663429 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.663455 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.663487 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.663520 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.663561 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.663586 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.663618 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.663643 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.663676 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.663699 4856 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.663724 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.663747 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.663781 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.663817 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.663849 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.663873 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.663967 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.663995 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.664018 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.664043 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.664068 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.664093 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.664136 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.664163 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.664230 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.664257 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 
26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.664290 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.664322 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.664346 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.664370 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.664399 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.664423 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: 
\"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.664450 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.664476 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.664510 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.664554 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.664581 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 16:58:54 
crc kubenswrapper[4856]: I0126 16:58:54.664607 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.664662 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.664697 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.664724 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.664755 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.664779 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.664864 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.664912 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.664939 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/8f28414c-12c1-4adb-be7b-6182310828eb-serviceca\") pod \"node-ca-tp5hk\" (UID: \"8f28414c-12c1-4adb-be7b-6182310828eb\") " pod="openshift-image-registry/node-ca-tp5hk" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.664976 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5swq\" (UniqueName: \"kubernetes.io/projected/8d21ac89-2ebd-49c3-9fe0-6c3f352d2257-kube-api-access-p5swq\") pod \"node-resolver-t4fq2\" (UID: \"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257\") " pod="openshift-dns/node-resolver-t4fq2" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.665003 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: 
\"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.665044 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.665067 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzc59\" (UniqueName: \"kubernetes.io/projected/8f28414c-12c1-4adb-be7b-6182310828eb-kube-api-access-zzc59\") pod \"node-ca-tp5hk\" (UID: \"8f28414c-12c1-4adb-be7b-6182310828eb\") " pod="openshift-image-registry/node-ca-tp5hk" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.665093 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.665116 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8f28414c-12c1-4adb-be7b-6182310828eb-host\") pod \"node-ca-tp5hk\" (UID: \"8f28414c-12c1-4adb-be7b-6182310828eb\") " pod="openshift-image-registry/node-ca-tp5hk" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.665148 4856 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.665182 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.665239 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.665267 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.665291 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8d21ac89-2ebd-49c3-9fe0-6c3f352d2257-hosts-file\") pod \"node-resolver-t4fq2\" (UID: \"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257\") " pod="openshift-dns/node-resolver-t4fq2" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.665318 4856 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.665343 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.665367 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.665405 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.665430 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " 
pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.665553 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.665578 4856 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.665595 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.665609 4856 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.665623 4856 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.665637 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.665652 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 
crc kubenswrapper[4856]: I0126 16:58:54.659764 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.659848 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.659864 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.659990 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.660009 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.660019 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.660152 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.660257 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.660342 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.660348 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.660510 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.660705 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.660708 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.660866 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.660910 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.661021 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.661187 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.661182 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.661391 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.661408 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.661505 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.661611 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.661676 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.661706 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.661800 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.661904 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.661964 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.662032 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.662060 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.662108 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.662227 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.662483 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.662493 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.662817 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.662863 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.663025 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.663059 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.663094 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.663277 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.663363 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.663373 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.663378 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.663485 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.663624 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.663782 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.663802 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.664020 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.664059 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.664269 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.664421 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.664458 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.664490 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.664690 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.665899 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.666031 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.666547 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.666769 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.667034 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.667144 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.667143 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.667188 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.667223 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.667592 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.667865 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.668006 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.668069 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.668077 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.668090 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.668351 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.669133 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.669210 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.669329 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.669450 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.669511 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.669680 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.669617 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.670214 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.670480 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.671503 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.672726 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.672895 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.673059 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.673438 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.673919 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.674314 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.674440 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.674802 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.674947 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.679857 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.680266 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.680545 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.680815 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.681923 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.682591 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.682999 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.683235 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.683630 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.684065 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.685417 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.687768 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.688080 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.688893 4856 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 26 16:58:54 crc kubenswrapper[4856]: E0126 16:58:54.689171 4856 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.689232 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.689261 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.689500 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.689910 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.693184 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.703786 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.724075 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.724345 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: E0126 16:58:54.724478 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:58:55.19097717 +0000 UTC m=+31.144231151 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.724679 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.725431 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.726795 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.729933 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.730007 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.730303 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.688826 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.730441 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.726477 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.730784 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.688885 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.730687 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.731035 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.731416 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.730884 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.732769 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.733115 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.733190 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.689094 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.733484 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: E0126 16:58:54.733500 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 16:58:55.233479199 +0000 UTC m=+31.186733180 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.733554 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). 
InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.733801 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.733928 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.733920 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.734033 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.734051 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.734096 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.734284 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.734442 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.734504 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.734849 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.734971 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: E0126 16:58:54.735082 4856 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 16:58:54 crc kubenswrapper[4856]: E0126 16:58:54.735153 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 16:58:55.235131344 +0000 UTC m=+31.188385325 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.735454 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.735646 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.735663 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.735869 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.736110 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.736376 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.736608 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.736637 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.736996 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.737138 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.737243 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.737323 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.737477 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.737575 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.737805 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.738231 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.738309 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.738548 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.738900 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.739029 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.739466 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.739468 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.739546 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.739923 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.740003 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.747101 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.747384 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.747775 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.748102 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.748374 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.748651 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.748759 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.749085 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.749264 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.749273 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.749497 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: E0126 16:58:54.749646 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 16:58:54 crc kubenswrapper[4856]: E0126 16:58:54.749707 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.749719 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: E0126 16:58:54.749728 4856 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:58:54 crc kubenswrapper[4856]: E0126 16:58:54.750049 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 16:58:55.250014753 +0000 UTC m=+31.203268924 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.750155 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.750175 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.750194 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.750279 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.750290 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.704487 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.698382 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.750711 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.750777 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.755731 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.757567 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.759086 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.759450 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.759843 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t4fq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p5swq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t4fq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:58:54 crc kubenswrapper[4856]: E0126 16:58:54.759965 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 16:58:54 crc kubenswrapper[4856]: E0126 16:58:54.760023 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 16:58:54 crc kubenswrapper[4856]: E0126 16:58:54.760049 4856 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:58:54 crc kubenswrapper[4856]: E0126 16:58:54.760213 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 16:58:55.260179322 +0000 UTC m=+31.213433463 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.764902 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.767557 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.767993 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzc59\" (UniqueName: \"kubernetes.io/projected/8f28414c-12c1-4adb-be7b-6182310828eb-kube-api-access-zzc59\") pod \"node-ca-tp5hk\" (UID: \"8f28414c-12c1-4adb-be7b-6182310828eb\") " pod="openshift-image-registry/node-ca-tp5hk" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768027 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8d21ac89-2ebd-49c3-9fe0-6c3f352d2257-hosts-file\") pod \"node-resolver-t4fq2\" (UID: \"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257\") " pod="openshift-dns/node-resolver-t4fq2" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768046 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8f28414c-12c1-4adb-be7b-6182310828eb-host\") pod \"node-ca-tp5hk\" (UID: \"8f28414c-12c1-4adb-be7b-6182310828eb\") " pod="openshift-image-registry/node-ca-tp5hk" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768085 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768100 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 16:58:54 crc 
kubenswrapper[4856]: I0126 16:58:54.768114 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/8f28414c-12c1-4adb-be7b-6182310828eb-serviceca\") pod \"node-ca-tp5hk\" (UID: \"8f28414c-12c1-4adb-be7b-6182310828eb\") " pod="openshift-image-registry/node-ca-tp5hk" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768128 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5swq\" (UniqueName: \"kubernetes.io/projected/8d21ac89-2ebd-49c3-9fe0-6c3f352d2257-kube-api-access-p5swq\") pod \"node-resolver-t4fq2\" (UID: \"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257\") " pod="openshift-dns/node-resolver-t4fq2" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768173 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768187 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768201 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768213 4856 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768224 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768234 4856 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768245 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768255 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768266 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768276 4856 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768285 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768293 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node 
\"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768302 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768310 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768320 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768329 4856 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768337 4856 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768346 4856 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768354 4856 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768362 4856 reconciler_common.go:293] "Volume 
detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768371 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768383 4856 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768392 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768402 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768410 4856 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768418 4856 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768426 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" 
DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768434 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768442 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768451 4856 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768459 4856 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768467 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768477 4856 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768485 4856 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: 
I0126 16:58:54.768494 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768501 4856 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768509 4856 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768518 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768541 4856 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768550 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768559 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768567 4856 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768576 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768585 4856 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768594 4856 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768602 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768611 4856 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768621 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768629 4856 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768637 4856 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768645 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768654 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768662 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768672 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768680 4856 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768688 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: 
\"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768696 4856 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768704 4856 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768715 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768724 4856 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768722 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768734 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768771 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768790 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768804 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768818 4856 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768830 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768842 4856 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768853 4856 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768865 4856 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768876 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768888 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768900 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768925 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768937 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: 
\"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768950 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768965 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768983 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.768997 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.769487 4856 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.769510 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.769522 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: 
\"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.769581 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.769592 4856 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.769601 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8d21ac89-2ebd-49c3-9fe0-6c3f352d2257-hosts-file\") pod \"node-resolver-t4fq2\" (UID: \"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257\") " pod="openshift-dns/node-resolver-t4fq2" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.769605 4856 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.769637 4856 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.769643 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.769685 4856 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.769746 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8f28414c-12c1-4adb-be7b-6182310828eb-host\") pod \"node-ca-tp5hk\" (UID: \"8f28414c-12c1-4adb-be7b-6182310828eb\") " pod="openshift-image-registry/node-ca-tp5hk" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.769649 4856 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.769799 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.769812 4856 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.769824 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.769836 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath 
\"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.769849 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.769861 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.769873 4856 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.769883 4856 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.769895 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.769906 4856 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.769923 4856 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.769933 4856 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.769944 4856 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.769954 4856 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.769964 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.769974 4856 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.769983 4856 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.769992 4856 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770002 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: 
\"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770014 4856 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770026 4856 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770038 4856 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770049 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770060 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770071 4856 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770081 4856 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770091 4856 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770102 4856 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770112 4856 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770122 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770133 4856 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770146 4856 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770157 4856 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770180 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770192 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770204 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770216 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770228 4856 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770245 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770257 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 
16:58:54.770267 4856 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770279 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770289 4856 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770300 4856 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770313 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770337 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770349 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770361 4856 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770373 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770385 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770396 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770409 4856 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770420 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770430 4856 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770442 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" 
DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770452 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770462 4856 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770474 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770484 4856 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770494 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770503 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770513 4856 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770538 4856 reconciler_common.go:293] "Volume 
detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770550 4856 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770559 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770568 4856 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770576 4856 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770583 4856 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770593 4856 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770602 4856 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770610 4856 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770619 4856 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770628 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770640 4856 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770647 4856 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770656 4856 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770664 4856 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 
16:58:54.770672 4856 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770681 4856 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770691 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770700 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770709 4856 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770718 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770726 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770735 4856 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770744 4856 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770753 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770760 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770768 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770776 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770785 4856 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770793 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: 
\"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770801 4856 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770809 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770817 4856 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770825 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770835 4856 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770843 4856 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.770851 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" 
Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.773225 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.773566 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/8f28414c-12c1-4adb-be7b-6182310828eb-serviceca\") pod \"node-ca-tp5hk\" (UID: \"8f28414c-12c1-4adb-be7b-6182310828eb\") " pod="openshift-image-registry/node-ca-tp5hk" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.782486 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.792161 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.796175 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzc59\" (UniqueName: \"kubernetes.io/projected/8f28414c-12c1-4adb-be7b-6182310828eb-kube-api-access-zzc59\") pod \"node-ca-tp5hk\" (UID: \"8f28414c-12c1-4adb-be7b-6182310828eb\") " pod="openshift-image-registry/node-ca-tp5hk" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.796732 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t4fq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p5swq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t4fq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.797827 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5swq\" (UniqueName: \"kubernetes.io/projected/8d21ac89-2ebd-49c3-9fe0-6c3f352d2257-kube-api-access-p5swq\") pod \"node-resolver-t4fq2\" (UID: \"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257\") " pod="openshift-dns/node-resolver-t4fq2" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.812432 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.823562 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.837066 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.847323 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tp5hk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f28414c-12c1-4adb-be7b-6182310828eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzc59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tp5hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.852596 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-tp5hk" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.863097 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.863774 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:58:54 crc kubenswrapper[4856]: W0126 16:58:54.867994 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f28414c_12c1_4adb_be7b_6182310828eb.slice/crio-a37e0c99cc001b0c3bf3066fb8d729ead73f8dac4bab47ada43a1f8f0f83aeb9 WatchSource:0}: Error finding container a37e0c99cc001b0c3bf3066fb8d729ead73f8dac4bab47ada43a1f8f0f83aeb9: Status 404 returned error can't find the container with id a37e0c99cc001b0c3bf3066fb8d729ead73f8dac4bab47ada43a1f8f0f83aeb9 Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.871620 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.871646 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.877232 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.879421 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.895223 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 16:58:54 crc kubenswrapper[4856]: I0126 16:58:54.895230 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:54.901879 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-t4fq2" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:54.931772 4856 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 26 16:58:55 crc kubenswrapper[4856]: W0126 16:58:54.931967 4856 reflector.go:484] object-"openshift-network-operator"/"iptables-alerter-script": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"iptables-alerter-script": Unexpected watch close - watch lasted less than a second and no items received Jan 26 16:58:55 crc kubenswrapper[4856]: W0126 16:58:54.932001 4856 reflector.go:484] object-"openshift-network-node-identity"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 26 16:58:55 crc kubenswrapper[4856]: W0126 16:58:54.932022 4856 reflector.go:484] object-"openshift-network-node-identity"/"ovnkube-identity-cm": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"ovnkube-identity-cm": Unexpected watch close - watch lasted less than a second and no items received Jan 26 16:58:55 crc kubenswrapper[4856]: W0126 16:58:54.932069 4856 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: very short watch: object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": Unexpected watch close - watch lasted less than a second and no items received Jan 26 16:58:55 crc kubenswrapper[4856]: E0126 16:58:54.932123 4856 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/events\": read tcp 38.102.83.241:34646->38.102.83.241:6443: use of closed network connection" 
event="&Event{ObjectMeta:{network-node-identity-vrzqb.188e566e339fe4da openshift-network-node-identity 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-network-node-identity,Name:network-node-identity-vrzqb,UID:ef543e1b-8068-4ea3-b32a-61027b32e95d,APIVersion:v1,ResourceVersion:25324,FieldPath:spec.containers{webhook},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 16:58:54.915691738 +0000 UTC m=+30.868945709,LastTimestamp:2026-01-26 16:58:54.915691738 +0000 UTC m=+30.868945709,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 16:58:55 crc kubenswrapper[4856]: W0126 16:58:54.932260 4856 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 26 16:58:55 crc kubenswrapper[4856]: W0126 16:58:54.932281 4856 reflector.go:484] object-"openshift-network-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 26 16:58:55 crc kubenswrapper[4856]: W0126 16:58:54.932303 4856 reflector.go:484] object-"openshift-network-operator"/"metrics-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-network-operator"/"metrics-tls": Unexpected watch close - watch lasted less than a second and no items received Jan 26 16:58:55 crc kubenswrapper[4856]: W0126 16:58:54.932323 4856 reflector.go:484] object-"openshift-dns"/"openshift-service-ca.crt": 
watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 26 16:58:55 crc kubenswrapper[4856]: W0126 16:58:54.932348 4856 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 26 16:58:55 crc kubenswrapper[4856]: W0126 16:58:54.932394 4856 reflector.go:484] object-"openshift-image-registry"/"node-ca-dockercfg-4777p": watch of *v1.Secret ended with: very short watch: object-"openshift-image-registry"/"node-ca-dockercfg-4777p": Unexpected watch close - watch lasted less than a second and no items received Jan 26 16:58:55 crc kubenswrapper[4856]: W0126 16:58:54.932416 4856 reflector.go:484] object-"openshift-network-node-identity"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Jan 26 16:58:55 crc kubenswrapper[4856]: W0126 16:58:54.933100 4856 reflector.go:484] pkg/kubelet/config/apiserver.go:66: watch of *v1.Pod ended with: very short watch: pkg/kubelet/config/apiserver.go:66: Unexpected watch close - watch lasted less than a second and no items received Jan 26 16:58:55 crc kubenswrapper[4856]: W0126 16:58:54.933126 4856 reflector.go:484] object-"openshift-network-node-identity"/"network-node-identity-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-network-node-identity"/"network-node-identity-cert": Unexpected watch close - watch lasted less than a second and no items received Jan 26 16:58:55 crc kubenswrapper[4856]: W0126 16:58:54.933148 4856 reflector.go:484] object-"openshift-image-registry"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: 
object-"openshift-image-registry"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 26 16:58:55 crc kubenswrapper[4856]: W0126 16:58:54.933183 4856 reflector.go:484] object-"openshift-network-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 26 16:58:55 crc kubenswrapper[4856]: W0126 16:58:54.933202 4856 reflector.go:484] object-"openshift-image-registry"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 26 16:58:55 crc kubenswrapper[4856]: W0126 16:58:54.933248 4856 reflector.go:484] object-"openshift-image-registry"/"image-registry-certificates": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"image-registry-certificates": Unexpected watch close - watch lasted less than a second and no items received Jan 26 16:58:55 crc kubenswrapper[4856]: W0126 16:58:54.933269 4856 reflector.go:484] object-"openshift-network-node-identity"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 26 16:58:55 crc kubenswrapper[4856]: W0126 16:58:55.068116 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-8e96a123a536cdbe669eb3f2682c4e4aa9f0a6ed84a372d93b59d4ff124bfbdd WatchSource:0}: Error finding container 8e96a123a536cdbe669eb3f2682c4e4aa9f0a6ed84a372d93b59d4ff124bfbdd: Status 404 returned error can't find the container with id 
8e96a123a536cdbe669eb3f2682c4e4aa9f0a6ed84a372d93b59d4ff124bfbdd Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.274131 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.274253 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.274304 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:58:55 crc kubenswrapper[4856]: E0126 16:58:55.274391 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:58:56.274356378 +0000 UTC m=+32.227610359 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:58:55 crc kubenswrapper[4856]: E0126 16:58:55.274482 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 16:58:55 crc kubenswrapper[4856]: E0126 16:58:55.274541 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 16:58:55 crc kubenswrapper[4856]: E0126 16:58:55.274556 4856 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.274516 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:58:55 crc kubenswrapper[4856]: E0126 16:58:55.274624 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 16:58:56.274606484 +0000 UTC m=+32.227860465 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.274650 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:58:55 crc kubenswrapper[4856]: E0126 16:58:55.274677 4856 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 16:58:55 crc kubenswrapper[4856]: E0126 16:58:55.274710 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 16:58:56.274701687 +0000 UTC m=+32.227955668 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 16:58:55 crc kubenswrapper[4856]: E0126 16:58:55.274775 4856 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 16:58:55 crc kubenswrapper[4856]: E0126 16:58:55.274797 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 16:58:56.27479159 +0000 UTC m=+32.228045571 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 16:58:55 crc kubenswrapper[4856]: E0126 16:58:55.274881 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 16:58:55 crc kubenswrapper[4856]: E0126 16:58:55.274903 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 16:58:55 crc kubenswrapper[4856]: E0126 16:58:55.274922 4856 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:58:55 crc kubenswrapper[4856]: E0126 16:58:55.274976 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 16:58:56.274965764 +0000 UTC m=+32.228219925 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.412723 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.413824 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.415807 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.416941 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" 
path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.419601 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.420630 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.421831 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.423566 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.424451 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.425945 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.426905 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.428218 4856 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.430465 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.431576 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.432945 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.433773 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.434435 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.436101 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.436749 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.437604 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.438598 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tp5hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f28414c-12c1-4adb-be7b-6182310828eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzc59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tp5hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.439845 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.440727 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.443033 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.443686 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.445217 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.445841 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.447768 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.448803 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.449400 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.450981 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.451670 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.452835 4856 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.452826 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.453152 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.455295 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" 
path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.457391 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.457969 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 17:38:09.462757487 +0000 UTC Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.458783 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.460887 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.462128 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.463242 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.464835 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.465897 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.466448 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.467689 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.468894 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.469595 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.470430 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.470957 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.472065 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.472795 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" 
path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.473704 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.464855 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.474214 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.474707 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.475646 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.476209 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.477075 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.498452 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.564402 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.580648 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.589908 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t4fq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p5swq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t4fq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.870238 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-t4fq2" event={"ID":"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257","Type":"ContainerStarted","Data":"627cd6b39fdedd967fffcfc0755439277b2d73016fba1ddc7f5b9d5deba43b8f"} Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.870284 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-t4fq2" event={"ID":"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257","Type":"ContainerStarted","Data":"0daeec337d442d6b72d206a5437a7229a7184f72a23ece576a2bf30bb2aee119"} Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.871271 4856 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"8e96a123a536cdbe669eb3f2682c4e4aa9f0a6ed84a372d93b59d4ff124bfbdd"} Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.873743 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"191ad5d41024c88c1a4fbc30f307dd6340fd55b93b00d22f62209d0e82be286f"} Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.873809 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ccf2a4d8be409a46ffe5702797e56a80023a72a19fb7dc5c49e5f4984cedc600"} Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.873841 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"47335579ac42d26a6a37c7e5cae4b0819f8404d34215632c9ea2b86bc72395da"} Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.875845 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.877215 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"3c09748c0963bdb50c9552f249d1135ea53e68c52babd788dad050951cb849cf"} Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.877289 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"1e97d53e095cbf2a8b3483f26f64d24c54d4eca3ae638425541067dd3c3e6c08"} Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.878751 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-tp5hk" event={"ID":"8f28414c-12c1-4adb-be7b-6182310828eb","Type":"ContainerStarted","Data":"0af4930ff9cebd62106545fb3da2dfc93f7b591426fe4c85aa2e637b60c935f7"} Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.878796 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-tp5hk" event={"ID":"8f28414c-12c1-4adb-be7b-6182310828eb","Type":"ContainerStarted","Data":"a37e0c99cc001b0c3bf3066fb8d729ead73f8dac4bab47ada43a1f8f0f83aeb9"} Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.883731 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.884172 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 26 16:58:55 crc kubenswrapper[4856]: I0126 16:58:55.906972 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tp5hk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f28414c-12c1-4adb-be7b-6182310828eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzc59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tp5hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.018598 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.021580 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.030014 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.040461 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.041885 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.053004 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.063132 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t4fq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://627cd6b39fdedd967fffcfc0755439277b2d73016fba1ddc7f5b9d5deba43b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p5swq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t4fq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.074835 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.084737 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.099127 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.119624 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.130377 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tp5hk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f28414c-12c1-4adb-be7b-6182310828eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af4930ff9cebd62106545fb3da2dfc93f7b591426fe4c85aa2e637b60c935f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzc59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tp5hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.143422 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c09748c0963bdb50c9552f249d1135ea53e68c52babd788dad050951cb849cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"na
me\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.145691 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.149166 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.156094 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.164032 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.167717 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191ad5d41024c88c1a4fbc30f307dd6340fd55b93b00d22f62209d0e82be286f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf2a4d8be409a46ffe5702797e56a80023a72a19fb7dc5c49e5f4984cedc600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.179940 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.191635 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.202637 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t4fq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://627cd6b39fdedd967fffcfc0755439277b2d73016fba1ddc7f5b9d5deba43b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p5swq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t4fq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.271648 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.297117 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:58:56 crc kubenswrapper[4856]: E0126 16:58:56.297400 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:58:58.29736032 +0000 UTC m=+34.250614321 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.297472 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.297565 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.297605 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.297645 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:58:56 crc kubenswrapper[4856]: E0126 16:58:56.297785 4856 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 16:58:56 crc kubenswrapper[4856]: E0126 16:58:56.297839 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 16:58:58.297828303 +0000 UTC m=+34.251082284 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 16:58:56 crc kubenswrapper[4856]: E0126 16:58:56.297847 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 16:58:56 crc kubenswrapper[4856]: E0126 16:58:56.297867 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 16:58:56 crc kubenswrapper[4856]: E0126 16:58:56.297880 4856 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:58:56 crc kubenswrapper[4856]: E0126 16:58:56.297917 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 16:58:58.297903915 +0000 UTC m=+34.251157896 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:58:56 crc kubenswrapper[4856]: E0126 16:58:56.298015 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 16:58:56 crc kubenswrapper[4856]: E0126 16:58:56.298075 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 16:58:56 crc kubenswrapper[4856]: E0126 16:58:56.298097 4856 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:58:56 crc kubenswrapper[4856]: E0126 16:58:56.298175 4856 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Jan 26 16:58:56 crc kubenswrapper[4856]: E0126 16:58:56.298193 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 16:58:58.298164813 +0000 UTC m=+34.251418814 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:58:56 crc kubenswrapper[4856]: E0126 16:58:56.298366 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 16:58:58.298345127 +0000 UTC m=+34.251599108 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.367989 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.394938 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.394974 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.395008 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:58:56 crc kubenswrapper[4856]: E0126 16:58:56.395059 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:58:56 crc kubenswrapper[4856]: E0126 16:58:56.395141 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:58:56 crc kubenswrapper[4856]: E0126 16:58:56.395249 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.396683 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.419888 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.429918 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.458802 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 01:01:21.939772445 +0000 UTC Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.466305 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.472889 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.595478 4856 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.596341 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-v2l7v","openshift-ovn-kubernetes/ovnkube-node-pxh94","openshift-machine-config-operator/machine-config-daemon-xm9cq","openshift-multus/multus-rq622"] Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.597376 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.599165 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.601029 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.601131 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.601177 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.601349 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.602250 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.613356 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.613430 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.613964 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.614629 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.614707 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.614870 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.616044 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.616206 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.616369 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.616488 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.616541 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.616582 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.616824 4856 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.617031 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.617196 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.617362 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.624833 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:56Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.637807 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tp5hk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f28414c-12c1-4adb-be7b-6182310828eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af4930ff9cebd62106545fb3da2dfc93f7b591426fe4c85aa2e637b60c935f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzc59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tp5hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:56Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.654829 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c09748c0963bdb50c9552f249d1135ea53e68c52babd788dad050951cb849cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:56Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.670358 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:56Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.683617 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:56Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.695041 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t4fq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://627cd6b39fdedd967fffcfc0755439277b2d73016fba1ddc7f5b9d5deba43b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p5swq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t4fq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:56Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.700488 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-systemd-units\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.700569 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7a742e7b-c420-46e3-9e96-e9c744af6124-etc-kubernetes\") pod \"multus-rq622\" (UID: \"7a742e7b-c420-46e3-9e96-e9c744af6124\") " pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.700590 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-ovnkube-config\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.700612 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm9x6\" (UniqueName: \"kubernetes.io/projected/ad7b59f9-beb7-49d6-a2d1-e29133e46854-kube-api-access-zm9x6\") pod \"multus-additional-cni-plugins-v2l7v\" (UID: 
\"ad7b59f9-beb7-49d6-a2d1-e29133e46854\") " pod="openshift-multus/multus-additional-cni-plugins-v2l7v" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.700632 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/63c75ede-5170-4db0-811b-5217ef8d72b3-proxy-tls\") pod \"machine-config-daemon-xm9cq\" (UID: \"63c75ede-5170-4db0-811b-5217ef8d72b3\") " pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.700649 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-run-openvswitch\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.700665 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7a742e7b-c420-46e3-9e96-e9c744af6124-multus-daemon-config\") pod \"multus-rq622\" (UID: \"7a742e7b-c420-46e3-9e96-e9c744af6124\") " pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.700772 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ad7b59f9-beb7-49d6-a2d1-e29133e46854-cnibin\") pod \"multus-additional-cni-plugins-v2l7v\" (UID: \"ad7b59f9-beb7-49d6-a2d1-e29133e46854\") " pod="openshift-multus/multus-additional-cni-plugins-v2l7v" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.700842 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-run-systemd\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.700872 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7a742e7b-c420-46e3-9e96-e9c744af6124-multus-cni-dir\") pod \"multus-rq622\" (UID: \"7a742e7b-c420-46e3-9e96-e9c744af6124\") " pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.700889 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7a742e7b-c420-46e3-9e96-e9c744af6124-os-release\") pod \"multus-rq622\" (UID: \"7a742e7b-c420-46e3-9e96-e9c744af6124\") " pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.700931 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-host-kubelet\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.700951 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-node-log\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.700968 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: 
\"kubernetes.io/host-path/7a742e7b-c420-46e3-9e96-e9c744af6124-hostroot\") pod \"multus-rq622\" (UID: \"7a742e7b-c420-46e3-9e96-e9c744af6124\") " pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.700985 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ad7b59f9-beb7-49d6-a2d1-e29133e46854-cni-binary-copy\") pod \"multus-additional-cni-plugins-v2l7v\" (UID: \"ad7b59f9-beb7-49d6-a2d1-e29133e46854\") " pod="openshift-multus/multus-additional-cni-plugins-v2l7v" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.701007 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-host-cni-bin\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.701025 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kdbz\" (UniqueName: \"kubernetes.io/projected/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-kube-api-access-9kdbz\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.701044 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7a742e7b-c420-46e3-9e96-e9c744af6124-cnibin\") pod \"multus-rq622\" (UID: \"7a742e7b-c420-46e3-9e96-e9c744af6124\") " pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.701058 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/ad7b59f9-beb7-49d6-a2d1-e29133e46854-system-cni-dir\") pod \"multus-additional-cni-plugins-v2l7v\" (UID: \"ad7b59f9-beb7-49d6-a2d1-e29133e46854\") " pod="openshift-multus/multus-additional-cni-plugins-v2l7v" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.701074 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ad7b59f9-beb7-49d6-a2d1-e29133e46854-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-v2l7v\" (UID: \"ad7b59f9-beb7-49d6-a2d1-e29133e46854\") " pod="openshift-multus/multus-additional-cni-plugins-v2l7v" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.701119 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7a742e7b-c420-46e3-9e96-e9c744af6124-system-cni-dir\") pod \"multus-rq622\" (UID: \"7a742e7b-c420-46e3-9e96-e9c744af6124\") " pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.701146 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ad7b59f9-beb7-49d6-a2d1-e29133e46854-tuning-conf-dir\") pod \"multus-additional-cni-plugins-v2l7v\" (UID: \"ad7b59f9-beb7-49d6-a2d1-e29133e46854\") " pod="openshift-multus/multus-additional-cni-plugins-v2l7v" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.701164 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/63c75ede-5170-4db0-811b-5217ef8d72b3-rootfs\") pod \"machine-config-daemon-xm9cq\" (UID: \"63c75ede-5170-4db0-811b-5217ef8d72b3\") " pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.701185 4856 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96lw2\" (UniqueName: \"kubernetes.io/projected/63c75ede-5170-4db0-811b-5217ef8d72b3-kube-api-access-96lw2\") pod \"machine-config-daemon-xm9cq\" (UID: \"63c75ede-5170-4db0-811b-5217ef8d72b3\") " pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.701215 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7a742e7b-c420-46e3-9e96-e9c744af6124-host-run-k8s-cni-cncf-io\") pod \"multus-rq622\" (UID: \"7a742e7b-c420-46e3-9e96-e9c744af6124\") " pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.701246 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-etc-openvswitch\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.701302 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-host-run-ovn-kubernetes\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.701363 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7a742e7b-c420-46e3-9e96-e9c744af6124-cni-binary-copy\") pod \"multus-rq622\" (UID: \"7a742e7b-c420-46e3-9e96-e9c744af6124\") " pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc 
kubenswrapper[4856]: I0126 16:58:56.701378 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7a742e7b-c420-46e3-9e96-e9c744af6124-host-var-lib-cni-bin\") pod \"multus-rq622\" (UID: \"7a742e7b-c420-46e3-9e96-e9c744af6124\") " pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.701426 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ad7b59f9-beb7-49d6-a2d1-e29133e46854-os-release\") pod \"multus-additional-cni-plugins-v2l7v\" (UID: \"ad7b59f9-beb7-49d6-a2d1-e29133e46854\") " pod="openshift-multus/multus-additional-cni-plugins-v2l7v" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.701462 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.701494 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7a742e7b-c420-46e3-9e96-e9c744af6124-host-run-multus-certs\") pod \"multus-rq622\" (UID: \"7a742e7b-c420-46e3-9e96-e9c744af6124\") " pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.701544 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-run-ovn\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.701572 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-log-socket\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.701600 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-ovnkube-script-lib\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.701632 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7a742e7b-c420-46e3-9e96-e9c744af6124-multus-socket-dir-parent\") pod \"multus-rq622\" (UID: \"7a742e7b-c420-46e3-9e96-e9c744af6124\") " pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.701649 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7a742e7b-c420-46e3-9e96-e9c744af6124-host-run-netns\") pod \"multus-rq622\" (UID: \"7a742e7b-c420-46e3-9e96-e9c744af6124\") " pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.701669 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7a742e7b-c420-46e3-9e96-e9c744af6124-host-var-lib-kubelet\") pod \"multus-rq622\" (UID: \"7a742e7b-c420-46e3-9e96-e9c744af6124\") 
" pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.701686 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7a742e7b-c420-46e3-9e96-e9c744af6124-multus-conf-dir\") pod \"multus-rq622\" (UID: \"7a742e7b-c420-46e3-9e96-e9c744af6124\") " pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.701701 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/63c75ede-5170-4db0-811b-5217ef8d72b3-mcd-auth-proxy-config\") pod \"machine-config-daemon-xm9cq\" (UID: \"63c75ede-5170-4db0-811b-5217ef8d72b3\") " pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.701780 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-host-slash\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.701805 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-host-cni-netd\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.701828 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-ovn-node-metrics-cert\") pod \"ovnkube-node-pxh94\" (UID: 
\"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.701851 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7a742e7b-c420-46e3-9e96-e9c744af6124-host-var-lib-cni-multus\") pod \"multus-rq622\" (UID: \"7a742e7b-c420-46e3-9e96-e9c744af6124\") " pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.701881 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8plh8\" (UniqueName: \"kubernetes.io/projected/7a742e7b-c420-46e3-9e96-e9c744af6124-kube-api-access-8plh8\") pod \"multus-rq622\" (UID: \"7a742e7b-c420-46e3-9e96-e9c744af6124\") " pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.701926 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-host-run-netns\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.701957 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-var-lib-openvswitch\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.701991 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-env-overrides\") pod 
\"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.707997 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191ad5d41024c88c1a4fbc30f307dd6340fd55b93b00d22f62209d0e82be286f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://ccf2a4d8be409a46ffe5702797e56a80023a72a19fb7dc5c49e5f4984cedc600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:56Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.719987 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:56Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.734439 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad7b59f9-beb7-49d6-a2d1-e29133e46854\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v2l7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:56Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.747574 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191ad5d41024c88c1a4fbc30f307dd6340fd55b93b00d22f62209d0e82be286f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf2a4d8be409a46ffe5702797e56a80023a72a19fb7dc5c49e5f4984cedc600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:56Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.760933 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:56Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.773800 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:56Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.783472 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t4fq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://627cd6b39fdedd967fffcfc0755439277b2d73016fba1ddc7f5b9d5deba43b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p5swq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t4fq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:56Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.796061 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq622" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a742e7b-c420-46e3-9e96-e9c744af6124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8plh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq622\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:56Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.803547 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.803600 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7a742e7b-c420-46e3-9e96-e9c744af6124-host-run-multus-certs\") pod \"multus-rq622\" (UID: \"7a742e7b-c420-46e3-9e96-e9c744af6124\") " pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.803623 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-run-ovn\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.803643 4856 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-log-socket\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.803675 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-ovnkube-script-lib\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.803695 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7a742e7b-c420-46e3-9e96-e9c744af6124-multus-socket-dir-parent\") pod \"multus-rq622\" (UID: \"7a742e7b-c420-46e3-9e96-e9c744af6124\") " pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.803714 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7a742e7b-c420-46e3-9e96-e9c744af6124-host-run-netns\") pod \"multus-rq622\" (UID: \"7a742e7b-c420-46e3-9e96-e9c744af6124\") " pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.803735 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7a742e7b-c420-46e3-9e96-e9c744af6124-host-var-lib-kubelet\") pod \"multus-rq622\" (UID: \"7a742e7b-c420-46e3-9e96-e9c744af6124\") " pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.803754 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" 
(UniqueName: \"kubernetes.io/host-path/7a742e7b-c420-46e3-9e96-e9c744af6124-multus-conf-dir\") pod \"multus-rq622\" (UID: \"7a742e7b-c420-46e3-9e96-e9c744af6124\") " pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.803747 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.803799 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7a742e7b-c420-46e3-9e96-e9c744af6124-host-run-netns\") pod \"multus-rq622\" (UID: \"7a742e7b-c420-46e3-9e96-e9c744af6124\") " pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.803795 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7a742e7b-c420-46e3-9e96-e9c744af6124-host-run-multus-certs\") pod \"multus-rq622\" (UID: \"7a742e7b-c420-46e3-9e96-e9c744af6124\") " pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.803777 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/63c75ede-5170-4db0-811b-5217ef8d72b3-mcd-auth-proxy-config\") pod \"machine-config-daemon-xm9cq\" (UID: \"63c75ede-5170-4db0-811b-5217ef8d72b3\") " pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.803758 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-run-ovn\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.803877 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-log-socket\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.803891 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7a742e7b-c420-46e3-9e96-e9c744af6124-multus-conf-dir\") pod \"multus-rq622\" (UID: \"7a742e7b-c420-46e3-9e96-e9c744af6124\") " pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.803886 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7a742e7b-c420-46e3-9e96-e9c744af6124-host-var-lib-kubelet\") pod \"multus-rq622\" (UID: \"7a742e7b-c420-46e3-9e96-e9c744af6124\") " pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.803956 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7a742e7b-c420-46e3-9e96-e9c744af6124-multus-socket-dir-parent\") pod \"multus-rq622\" (UID: \"7a742e7b-c420-46e3-9e96-e9c744af6124\") " pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.804050 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-host-slash\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.804089 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-host-cni-netd\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.804116 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-ovn-node-metrics-cert\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.804132 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-host-slash\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.804156 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-host-cni-netd\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.804166 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7a742e7b-c420-46e3-9e96-e9c744af6124-host-var-lib-cni-multus\") pod \"multus-rq622\" (UID: \"7a742e7b-c420-46e3-9e96-e9c744af6124\") " pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 
16:58:56.804139 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7a742e7b-c420-46e3-9e96-e9c744af6124-host-var-lib-cni-multus\") pod \"multus-rq622\" (UID: \"7a742e7b-c420-46e3-9e96-e9c744af6124\") " pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.804209 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8plh8\" (UniqueName: \"kubernetes.io/projected/7a742e7b-c420-46e3-9e96-e9c744af6124-kube-api-access-8plh8\") pod \"multus-rq622\" (UID: \"7a742e7b-c420-46e3-9e96-e9c744af6124\") " pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.804259 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-host-run-netns\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.804281 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-var-lib-openvswitch\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.804301 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-env-overrides\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.804329 4856 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-systemd-units\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.804331 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-var-lib-openvswitch\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.804356 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7a742e7b-c420-46e3-9e96-e9c744af6124-etc-kubernetes\") pod \"multus-rq622\" (UID: \"7a742e7b-c420-46e3-9e96-e9c744af6124\") " pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.804385 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm9x6\" (UniqueName: \"kubernetes.io/projected/ad7b59f9-beb7-49d6-a2d1-e29133e46854-kube-api-access-zm9x6\") pod \"multus-additional-cni-plugins-v2l7v\" (UID: \"ad7b59f9-beb7-49d6-a2d1-e29133e46854\") " pod="openshift-multus/multus-additional-cni-plugins-v2l7v" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.804422 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/63c75ede-5170-4db0-811b-5217ef8d72b3-proxy-tls\") pod \"machine-config-daemon-xm9cq\" (UID: \"63c75ede-5170-4db0-811b-5217ef8d72b3\") " pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.804421 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-systemd-units\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.804440 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-ovnkube-config\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.804475 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-run-openvswitch\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.804503 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7a742e7b-c420-46e3-9e96-e9c744af6124-multus-daemon-config\") pod \"multus-rq622\" (UID: \"7a742e7b-c420-46e3-9e96-e9c744af6124\") " pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.804502 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7a742e7b-c420-46e3-9e96-e9c744af6124-etc-kubernetes\") pod \"multus-rq622\" (UID: \"7a742e7b-c420-46e3-9e96-e9c744af6124\") " pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.804573 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ad7b59f9-beb7-49d6-a2d1-e29133e46854-cnibin\") pod \"multus-additional-cni-plugins-v2l7v\" (UID: 
\"ad7b59f9-beb7-49d6-a2d1-e29133e46854\") " pod="openshift-multus/multus-additional-cni-plugins-v2l7v" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.804546 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ad7b59f9-beb7-49d6-a2d1-e29133e46854-cnibin\") pod \"multus-additional-cni-plugins-v2l7v\" (UID: \"ad7b59f9-beb7-49d6-a2d1-e29133e46854\") " pod="openshift-multus/multus-additional-cni-plugins-v2l7v" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.804628 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-run-systemd\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.804653 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7a742e7b-c420-46e3-9e96-e9c744af6124-multus-cni-dir\") pod \"multus-rq622\" (UID: \"7a742e7b-c420-46e3-9e96-e9c744af6124\") " pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.804672 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7a742e7b-c420-46e3-9e96-e9c744af6124-os-release\") pod \"multus-rq622\" (UID: \"7a742e7b-c420-46e3-9e96-e9c744af6124\") " pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.804716 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-host-kubelet\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc 
kubenswrapper[4856]: I0126 16:58:56.804733 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-node-log\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.804748 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7a742e7b-c420-46e3-9e96-e9c744af6124-hostroot\") pod \"multus-rq622\" (UID: \"7a742e7b-c420-46e3-9e96-e9c744af6124\") " pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.804773 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-host-cni-bin\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.804790 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kdbz\" (UniqueName: \"kubernetes.io/projected/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-kube-api-access-9kdbz\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.804808 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7a742e7b-c420-46e3-9e96-e9c744af6124-cnibin\") pod \"multus-rq622\" (UID: \"7a742e7b-c420-46e3-9e96-e9c744af6124\") " pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.804826 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" 
(UniqueName: \"kubernetes.io/host-path/ad7b59f9-beb7-49d6-a2d1-e29133e46854-system-cni-dir\") pod \"multus-additional-cni-plugins-v2l7v\" (UID: \"ad7b59f9-beb7-49d6-a2d1-e29133e46854\") " pod="openshift-multus/multus-additional-cni-plugins-v2l7v" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.804842 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ad7b59f9-beb7-49d6-a2d1-e29133e46854-cni-binary-copy\") pod \"multus-additional-cni-plugins-v2l7v\" (UID: \"ad7b59f9-beb7-49d6-a2d1-e29133e46854\") " pod="openshift-multus/multus-additional-cni-plugins-v2l7v" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.804862 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ad7b59f9-beb7-49d6-a2d1-e29133e46854-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-v2l7v\" (UID: \"ad7b59f9-beb7-49d6-a2d1-e29133e46854\") " pod="openshift-multus/multus-additional-cni-plugins-v2l7v" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.804874 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/63c75ede-5170-4db0-811b-5217ef8d72b3-mcd-auth-proxy-config\") pod \"machine-config-daemon-xm9cq\" (UID: \"63c75ede-5170-4db0-811b-5217ef8d72b3\") " pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.804884 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7a742e7b-c420-46e3-9e96-e9c744af6124-system-cni-dir\") pod \"multus-rq622\" (UID: \"7a742e7b-c420-46e3-9e96-e9c744af6124\") " pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.804894 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-ovnkube-script-lib\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.804908 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-env-overrides\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.804877 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-run-openvswitch\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.804905 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ad7b59f9-beb7-49d6-a2d1-e29133e46854-tuning-conf-dir\") pod \"multus-additional-cni-plugins-v2l7v\" (UID: \"ad7b59f9-beb7-49d6-a2d1-e29133e46854\") " pod="openshift-multus/multus-additional-cni-plugins-v2l7v" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.804962 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-host-cni-bin\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.804963 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/ad7b59f9-beb7-49d6-a2d1-e29133e46854-system-cni-dir\") pod \"multus-additional-cni-plugins-v2l7v\" (UID: \"ad7b59f9-beb7-49d6-a2d1-e29133e46854\") " pod="openshift-multus/multus-additional-cni-plugins-v2l7v" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.804983 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/63c75ede-5170-4db0-811b-5217ef8d72b3-rootfs\") pod \"machine-config-daemon-xm9cq\" (UID: \"63c75ede-5170-4db0-811b-5217ef8d72b3\") " pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.805012 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/63c75ede-5170-4db0-811b-5217ef8d72b3-rootfs\") pod \"machine-config-daemon-xm9cq\" (UID: \"63c75ede-5170-4db0-811b-5217ef8d72b3\") " pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.805029 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96lw2\" (UniqueName: \"kubernetes.io/projected/63c75ede-5170-4db0-811b-5217ef8d72b3-kube-api-access-96lw2\") pod \"machine-config-daemon-xm9cq\" (UID: \"63c75ede-5170-4db0-811b-5217ef8d72b3\") " pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.805060 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7a742e7b-c420-46e3-9e96-e9c744af6124-host-run-k8s-cni-cncf-io\") pod \"multus-rq622\" (UID: \"7a742e7b-c420-46e3-9e96-e9c744af6124\") " pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.805069 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/7a742e7b-c420-46e3-9e96-e9c744af6124-cnibin\") pod \"multus-rq622\" (UID: \"7a742e7b-c420-46e3-9e96-e9c744af6124\") " pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.805104 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-host-kubelet\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.805105 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-etc-openvswitch\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.805131 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-run-systemd\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.805137 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-host-run-ovn-kubernetes\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.805157 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7a742e7b-c420-46e3-9e96-e9c744af6124-hostroot\") pod \"multus-rq622\" (UID: 
\"7a742e7b-c420-46e3-9e96-e9c744af6124\") " pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.805168 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7a742e7b-c420-46e3-9e96-e9c744af6124-cni-binary-copy\") pod \"multus-rq622\" (UID: \"7a742e7b-c420-46e3-9e96-e9c744af6124\") " pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.805214 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7a742e7b-c420-46e3-9e96-e9c744af6124-host-var-lib-cni-bin\") pod \"multus-rq622\" (UID: \"7a742e7b-c420-46e3-9e96-e9c744af6124\") " pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.805243 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ad7b59f9-beb7-49d6-a2d1-e29133e46854-os-release\") pod \"multus-additional-cni-plugins-v2l7v\" (UID: \"ad7b59f9-beb7-49d6-a2d1-e29133e46854\") " pod="openshift-multus/multus-additional-cni-plugins-v2l7v" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.805332 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7a742e7b-c420-46e3-9e96-e9c744af6124-multus-cni-dir\") pod \"multus-rq622\" (UID: \"7a742e7b-c420-46e3-9e96-e9c744af6124\") " pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.805600 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7a742e7b-c420-46e3-9e96-e9c744af6124-multus-daemon-config\") pod \"multus-rq622\" (UID: \"7a742e7b-c420-46e3-9e96-e9c744af6124\") " pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 
16:58:56.805623 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-ovnkube-config\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.805623 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ad7b59f9-beb7-49d6-a2d1-e29133e46854-os-release\") pod \"multus-additional-cni-plugins-v2l7v\" (UID: \"ad7b59f9-beb7-49d6-a2d1-e29133e46854\") " pod="openshift-multus/multus-additional-cni-plugins-v2l7v" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.805630 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7a742e7b-c420-46e3-9e96-e9c744af6124-os-release\") pod \"multus-rq622\" (UID: \"7a742e7b-c420-46e3-9e96-e9c744af6124\") " pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.805651 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7a742e7b-c420-46e3-9e96-e9c744af6124-host-run-k8s-cni-cncf-io\") pod \"multus-rq622\" (UID: \"7a742e7b-c420-46e3-9e96-e9c744af6124\") " pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.805661 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-node-log\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.805702 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/7a742e7b-c420-46e3-9e96-e9c744af6124-host-var-lib-cni-bin\") pod \"multus-rq622\" (UID: \"7a742e7b-c420-46e3-9e96-e9c744af6124\") " pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.805746 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7a742e7b-c420-46e3-9e96-e9c744af6124-system-cni-dir\") pod \"multus-rq622\" (UID: \"7a742e7b-c420-46e3-9e96-e9c744af6124\") " pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.805769 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-etc-openvswitch\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.805781 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7a742e7b-c420-46e3-9e96-e9c744af6124-cni-binary-copy\") pod \"multus-rq622\" (UID: \"7a742e7b-c420-46e3-9e96-e9c744af6124\") " pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.805983 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ad7b59f9-beb7-49d6-a2d1-e29133e46854-tuning-conf-dir\") pod \"multus-additional-cni-plugins-v2l7v\" (UID: \"ad7b59f9-beb7-49d6-a2d1-e29133e46854\") " pod="openshift-multus/multus-additional-cni-plugins-v2l7v" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.805794 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-host-run-ovn-kubernetes\") pod \"ovnkube-node-pxh94\" (UID: 
\"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.806297 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ad7b59f9-beb7-49d6-a2d1-e29133e46854-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-v2l7v\" (UID: \"ad7b59f9-beb7-49d6-a2d1-e29133e46854\") " pod="openshift-multus/multus-additional-cni-plugins-v2l7v" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.806401 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ad7b59f9-beb7-49d6-a2d1-e29133e46854-cni-binary-copy\") pod \"multus-additional-cni-plugins-v2l7v\" (UID: \"ad7b59f9-beb7-49d6-a2d1-e29133e46854\") " pod="openshift-multus/multus-additional-cni-plugins-v2l7v" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.806472 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-host-run-netns\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.807648 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:56Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.812259 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-ovn-node-metrics-cert\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.815129 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/63c75ede-5170-4db0-811b-5217ef8d72b3-proxy-tls\") pod \"machine-config-daemon-xm9cq\" (UID: \"63c75ede-5170-4db0-811b-5217ef8d72b3\") " pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.821547 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm9x6\" (UniqueName: \"kubernetes.io/projected/ad7b59f9-beb7-49d6-a2d1-e29133e46854-kube-api-access-zm9x6\") pod 
\"multus-additional-cni-plugins-v2l7v\" (UID: \"ad7b59f9-beb7-49d6-a2d1-e29133e46854\") " pod="openshift-multus/multus-additional-cni-plugins-v2l7v" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.822432 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96lw2\" (UniqueName: \"kubernetes.io/projected/63c75ede-5170-4db0-811b-5217ef8d72b3-kube-api-access-96lw2\") pod \"machine-config-daemon-xm9cq\" (UID: \"63c75ede-5170-4db0-811b-5217ef8d72b3\") " pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.826385 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kdbz\" (UniqueName: \"kubernetes.io/projected/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-kube-api-access-9kdbz\") pod \"ovnkube-node-pxh94\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.827042 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8plh8\" (UniqueName: \"kubernetes.io/projected/7a742e7b-c420-46e3-9e96-e9c744af6124-kube-api-access-8plh8\") pod \"multus-rq622\" (UID: \"7a742e7b-c420-46e3-9e96-e9c744af6124\") " pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.827057 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad7b59f9-beb7-49d6-a2d1-e29133e46854\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v2l7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:56Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.851799 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:56Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.863697 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63c75ede-5170-4db0-811b-5217ef8d72b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xm9cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:56Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.873006 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tp5hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f28414c-12c1-4adb-be7b-6182310828eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af4930ff9cebd62106545fb3da2dfc93f7b591426fe4c85aa2e637b60c935f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\
",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzc59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tp5hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:56Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.893673 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c09748c0963bdb50c9552f249d1135ea53e68c52babd788dad050951cb849cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:56Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.905434 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:56Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.922720 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.929847 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.937861 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" Jan 26 16:58:56 crc kubenswrapper[4856]: I0126 16:58:56.944856 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-rq622" Jan 26 16:58:56 crc kubenswrapper[4856]: W0126 16:58:56.964612 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63c75ede_5170_4db0_811b_5217ef8d72b3.slice/crio-a4deba88fd2726a90167401cf17b82783a9a1e76e5ef43e68433893ec3aaa466 WatchSource:0}: Error finding container a4deba88fd2726a90167401cf17b82783a9a1e76e5ef43e68433893ec3aaa466: Status 404 returned error can't find the container with id a4deba88fd2726a90167401cf17b82783a9a1e76e5ef43e68433893ec3aaa466 Jan 26 16:58:57 crc kubenswrapper[4856]: I0126 16:58:57.459672 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 13:10:15.341683813 +0000 UTC Jan 26 16:58:57 crc kubenswrapper[4856]: I0126 16:58:57.854500 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 16:58:57 crc kubenswrapper[4856]: I0126 16:58:57.856006 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:57 crc kubenswrapper[4856]: I0126 16:58:57.856043 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:57 crc kubenswrapper[4856]: I0126 16:58:57.856053 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:57 crc kubenswrapper[4856]: I0126 16:58:57.856151 4856 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 16:58:57 crc kubenswrapper[4856]: I0126 16:58:57.873056 4856 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 26 16:58:57 crc kubenswrapper[4856]: I0126 16:58:57.873332 4856 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 26 16:58:57 crc kubenswrapper[4856]: I0126 
16:58:57.874553 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:57 crc kubenswrapper[4856]: I0126 16:58:57.874584 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:57 crc kubenswrapper[4856]: I0126 16:58:57.874593 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:57 crc kubenswrapper[4856]: I0126 16:58:57.874608 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:58:57 crc kubenswrapper[4856]: I0126 16:58:57.874626 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:58:57Z","lastTransitionTime":"2026-01-26T16:58:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:58:57 crc kubenswrapper[4856]: I0126 16:58:57.885618 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rq622" event={"ID":"7a742e7b-c420-46e3-9e96-e9c744af6124","Type":"ContainerStarted","Data":"ad7222c9b91c0065a545bc1904d9864a5923cc13bfb6617daeb4a965a830f191"} Jan 26 16:58:57 crc kubenswrapper[4856]: I0126 16:58:57.885662 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rq622" event={"ID":"7a742e7b-c420-46e3-9e96-e9c744af6124","Type":"ContainerStarted","Data":"e0f423c54a0c6586c0aecb6586b15b46e04f0d134b3d8019b80ae941f2d59917"} Jan 26 16:58:57 crc kubenswrapper[4856]: I0126 16:58:57.887173 4856 generic.go:334] "Generic (PLEG): container finished" podID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerID="d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8" exitCode=0 Jan 26 16:58:57 crc kubenswrapper[4856]: I0126 16:58:57.887238 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" event={"ID":"ab5b6f50-172b-4535-a0f9-5d103bcab4e7","Type":"ContainerDied","Data":"d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8"} Jan 26 16:58:57 crc kubenswrapper[4856]: I0126 16:58:57.887282 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" event={"ID":"ab5b6f50-172b-4535-a0f9-5d103bcab4e7","Type":"ContainerStarted","Data":"a1b2fe845f0957cc37219c78a754b5c2b9acc25bf2ef8f7083ca734c4c5c68b9"} Jan 26 16:58:57 crc kubenswrapper[4856]: I0126 16:58:57.888912 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" event={"ID":"63c75ede-5170-4db0-811b-5217ef8d72b3","Type":"ContainerStarted","Data":"da26ddcfc6ccde3c9aabd63bdef3435f9a9eaab8c095bfeef5670c1295576cb0"} Jan 26 16:58:57 crc kubenswrapper[4856]: I0126 16:58:57.888973 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" event={"ID":"63c75ede-5170-4db0-811b-5217ef8d72b3","Type":"ContainerStarted","Data":"54ca3fa13d9e8d442efa93b44a870369f7df3fe7562d77b98528f5c19a751f18"} Jan 26 16:58:57 crc kubenswrapper[4856]: I0126 16:58:57.888984 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" event={"ID":"63c75ede-5170-4db0-811b-5217ef8d72b3","Type":"ContainerStarted","Data":"a4deba88fd2726a90167401cf17b82783a9a1e76e5ef43e68433893ec3aaa466"} Jan 26 16:58:57 crc kubenswrapper[4856]: I0126 16:58:57.894068 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" event={"ID":"ad7b59f9-beb7-49d6-a2d1-e29133e46854","Type":"ContainerStarted","Data":"e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363"} Jan 26 16:58:57 crc kubenswrapper[4856]: I0126 16:58:57.894113 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" event={"ID":"ad7b59f9-beb7-49d6-a2d1-e29133e46854","Type":"ContainerStarted","Data":"90be437457a8690be4cf46ab14042d606cac05b13668a8e2661760045afcc8d8"} Jan 26 16:58:57 crc kubenswrapper[4856]: I0126 16:58:57.907702 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tp5hk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f28414c-12c1-4adb-be7b-6182310828eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af4930ff9cebd62106545fb3da2dfc93f7b591426fe4c85aa2e637b60c935f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzc59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tp5hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:57Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:57 crc kubenswrapper[4856]: E0126 16:58:57.917517 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17523591-a778-4a97-aeab-8a7a93101850\\\",\\\"systemUUID\\\":\\\"ca45d056-99cb-4442-8a44-7e899628ecb2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:57Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:57 crc kubenswrapper[4856]: I0126 16:58:57.922599 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:57 crc kubenswrapper[4856]: I0126 16:58:57.922636 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:57 crc kubenswrapper[4856]: I0126 16:58:57.922646 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:57 crc kubenswrapper[4856]: I0126 16:58:57.922661 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:58:57 crc kubenswrapper[4856]: I0126 16:58:57.922671 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:58:57Z","lastTransitionTime":"2026-01-26T16:58:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:58:57 crc kubenswrapper[4856]: I0126 16:58:57.968186 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c09748c0963bdb50c9552f249d1135ea53e68c52babd788dad050951cb849cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:57Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:57 crc kubenswrapper[4856]: E0126 16:58:57.982122 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17523591-a778-4a97-aeab-8a7a93101850\\\",\\\"systemUUID\\\":\\\"ca45d056-99cb-4442-8a44-7e899628ecb2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:57Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.091022 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.091080 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.091092 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.091110 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.091123 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:58:58Z","lastTransitionTime":"2026-01-26T16:58:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.095509 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:58Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:58 crc kubenswrapper[4856]: E0126 16:58:58.177586 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17523591-a778-4a97-aeab-8a7a93101850\\\",\\\"systemUUID\\\":\\\"ca45d056-99cb-4442-8a44-7e899628ecb2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:58Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.219246 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.219667 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.219751 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.219823 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.219904 4856 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:58:58Z","lastTransitionTime":"2026-01-26T16:58:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.230906 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:58Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.250309 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t4fq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://627cd6b39fdedd967fffcfc0755439277b2d73016fba1ddc7f5b9d5deba43b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p5swq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t4fq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:58Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:58 crc kubenswrapper[4856]: E0126 16:58:58.252100 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17523591-a778-4a97-aeab-8a7a93101850\\\",\\\"systemUUID\\\":\\\"ca45d056-99cb-4442-8a44-7e899628ecb2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:58Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.262587 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.262917 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.263002 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.263078 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.263136 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:58:58Z","lastTransitionTime":"2026-01-26T16:58:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:58:58 crc kubenswrapper[4856]: E0126 16:58:58.292370 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17523591-a778-4a97-aeab-8a7a93101850\\\",\\\"systemUUID\\\":\\\"ca45d056-99cb-4442-8a44-7e899628ecb2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:58Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:58 crc kubenswrapper[4856]: E0126 16:58:58.292922 4856 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.294586 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.294627 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.294635 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.294647 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.294658 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:58:58Z","lastTransitionTime":"2026-01-26T16:58:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.295659 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq622" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a742e7b-c420-46e3-9e96-e9c744af6124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7222c9b91c0065a545bc1904d9864a5923cc13bfb6617daeb4a965a830f191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8plh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq622\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:58Z 
is after 2025-08-24T17:21:41Z" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.335925 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191ad5d41024c88c1a4fbc30f307dd6340fd55b93b00d22f62209d0e82be286f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf2a4d8be409a46ffe5702797e56a80023a72a19fb7dc5c49e5f4984cedc600\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:58Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.362283 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:58Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.376288 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:58Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.386704 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.386949 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.387070 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:58:58 crc kubenswrapper[4856]: E0126 16:58:58.387192 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:59:02.387158329 +0000 UTC m=+38.340412340 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.387247 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.387305 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:58:58 crc kubenswrapper[4856]: E0126 16:58:58.387423 4856 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 16:58:58 crc kubenswrapper[4856]: E0126 16:58:58.387471 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 16:59:02.387461918 +0000 UTC m=+38.340715909 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 16:58:58 crc kubenswrapper[4856]: E0126 16:58:58.387581 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 16:58:58 crc kubenswrapper[4856]: E0126 16:58:58.387600 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 16:58:58 crc kubenswrapper[4856]: E0126 16:58:58.387614 4856 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:58:58 crc kubenswrapper[4856]: E0126 16:58:58.387643 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 16:59:02.387634652 +0000 UTC m=+38.340888723 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:58:58 crc kubenswrapper[4856]: E0126 16:58:58.388020 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 16:58:58 crc kubenswrapper[4856]: E0126 16:58:58.388108 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 16:58:58 crc kubenswrapper[4856]: E0126 16:58:58.388184 4856 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:58:58 crc kubenswrapper[4856]: E0126 16:58:58.388228 4856 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 16:58:58 crc kubenswrapper[4856]: E0126 16:58:58.388383 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 16:59:02.38828727 +0000 UTC m=+38.341541341 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:58:58 crc kubenswrapper[4856]: E0126 16:58:58.388487 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 16:59:02.388464065 +0000 UTC m=+38.341718086 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.393839 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad7b59f9-beb7-49d6-a2d1-e29133e46854\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins 
bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni
/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v2l7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:58Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.394092 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:58:58 crc kubenswrapper[4856]: E0126 16:58:58.394171 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.394216 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:58:58 crc kubenswrapper[4856]: E0126 16:58:58.394259 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.394305 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:58:58 crc kubenswrapper[4856]: E0126 16:58:58.394359 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.398405 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.398454 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.398465 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.398483 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.398495 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:58:58Z","lastTransitionTime":"2026-01-26T16:58:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.423083 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pxh94\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:58Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.460024 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 03:01:17.974540121 +0000 UTC Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.472479 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63c75ede-5170-4db0-811b-5217ef8d72b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xm9cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:58Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.486991 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63c75ede-5170-4db0-811b-5217ef8d72b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da26ddcfc6ccde3c9aabd63bdef3435f9a9eaab8c095bfeef5670c1295576cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ca3fa13d9e8d442efa93b44a870369f7df3fe7562d77b98528f5c19a751f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xm9cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:58Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.500833 4856 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.500876 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.500896 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.500912 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.500921 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:58:58Z","lastTransitionTime":"2026-01-26T16:58:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.524161 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tp5hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f28414c-12c1-4adb-be7b-6182310828eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af4930ff9cebd62106545fb3da2dfc93f7b591426fe4c85aa2e637b60c935f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzc59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tp5hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:58Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.603129 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.603169 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.603182 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.603197 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.603208 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:58:58Z","lastTransitionTime":"2026-01-26T16:58:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.612324 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c09748c0963bdb50c9552f249d1135ea53e68c52babd788dad050951cb849cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:58Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.627992 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:58Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.645065 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:58Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.657964 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t4fq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://627cd6b39fdedd967fffcfc0755439277b2d73016fba1ddc7f5b9d5deba43b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p5swq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t4fq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:58Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.676386 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq622" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a742e7b-c420-46e3-9e96-e9c744af6124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7222c9b91c0065a545bc1904d9864a5923cc13bfb6617daeb4a965a830f191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8plh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq622\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:58Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.691939 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191ad5d41024c88c1a4fbc30f307dd6340fd55b93b00d22f62209d0e82be286f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name
\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf2a4d8be409a46ffe5702797e56a80023a72a19fb7dc5c49e5f4984cedc600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:58Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.705488 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.705726 4856 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.705745 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.705764 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.705774 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:58:58Z","lastTransitionTime":"2026-01-26T16:58:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.708076 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:58Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.723497 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:58Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.744931 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad7b59f9-beb7-49d6-a2d1-e29133e46854\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\"
:\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPa
th\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v2l7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:58Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.771409 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:58Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.819810 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.819845 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.819857 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.819874 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.819886 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:58:58Z","lastTransitionTime":"2026-01-26T16:58:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.902761 4856 generic.go:334] "Generic (PLEG): container finished" podID="ad7b59f9-beb7-49d6-a2d1-e29133e46854" containerID="e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363" exitCode=0 Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.902883 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" event={"ID":"ad7b59f9-beb7-49d6-a2d1-e29133e46854","Type":"ContainerDied","Data":"e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363"} Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.916923 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" event={"ID":"ab5b6f50-172b-4535-a0f9-5d103bcab4e7","Type":"ContainerStarted","Data":"4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7"} Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.916987 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" event={"ID":"ab5b6f50-172b-4535-a0f9-5d103bcab4e7","Type":"ContainerStarted","Data":"b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde"} Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.917004 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" event={"ID":"ab5b6f50-172b-4535-a0f9-5d103bcab4e7","Type":"ContainerStarted","Data":"7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3"} Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.917017 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" event={"ID":"ab5b6f50-172b-4535-a0f9-5d103bcab4e7","Type":"ContainerStarted","Data":"e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1"} Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.917029 4856 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" event={"ID":"ab5b6f50-172b-4535-a0f9-5d103bcab4e7","Type":"ContainerStarted","Data":"83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc"} Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.917042 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" event={"ID":"ab5b6f50-172b-4535-a0f9-5d103bcab4e7","Type":"ContainerStarted","Data":"25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b"} Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.918150 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tp5hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f28414c-12c1-4adb-be7b-6182310828eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af4930ff9cebd62106545fb3da2dfc93f7b591426fe4c85aa2e637b60c935f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzc59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tp5hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:58Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.928401 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.928457 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.928470 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.928494 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:58:58 crc kubenswrapper[4856]: 
I0126 16:58:58.928507 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:58:58Z","lastTransitionTime":"2026-01-26T16:58:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:58:58 crc kubenswrapper[4856]: I0126 16:58:58.932089 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c09748c0963bdb50c9552f249d1135ea53e68c52babd788dad050951cb849cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:58Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.031060 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.031110 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.031118 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.031133 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.031142 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:58:59Z","lastTransitionTime":"2026-01-26T16:58:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.065310 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.080726 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191ad5d41024c88c1a4fbc30f307dd6340fd55b93b00d22f62209d0e82be286f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf2a4d8be409a46ffe5702797e56a80023a72a19fb7dc5c49e5f4984cedc600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.100906 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.127799 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.141930 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.141969 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.141977 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.141994 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.142004 4856 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:58:59Z","lastTransitionTime":"2026-01-26T16:58:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.151549 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t4fq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://627cd6b39fdedd967fffcfc0755439277b2d73016fba1ddc7f5b9d5deba43b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p5swq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t4fq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.172181 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq622" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a742e7b-c420-46e3-9e96-e9c744af6124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7222c9b91c0065a545bc1904d9864a5923cc13bfb6617daeb4a965a830f191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8plh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq622\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.208498 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.221260 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad7b59f9-beb7-49d6-a2d1-e29133e46854\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v2l7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.240864 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.244834 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.244858 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.244865 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.244879 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.244889 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:58:59Z","lastTransitionTime":"2026-01-26T16:58:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.253913 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63c75ede-5170-4db0-811b-5217ef8d72b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da26ddcfc6ccde3c9aabd63bdef3435f9a9eaab8c095bfeef5670c1295576cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ca3fa13d9e8d442efa93b44a870369f7df3fe7562d77b98528f5c19a751f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xm9cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.347931 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.348374 4856 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.348387 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.348406 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.348419 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:58:59Z","lastTransitionTime":"2026-01-26T16:58:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.451431 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.451461 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.451472 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.451486 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.451496 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:58:59Z","lastTransitionTime":"2026-01-26T16:58:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.460145 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 13:25:07.820004316 +0000 UTC Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.553263 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.553285 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.553292 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.553306 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.553315 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:58:59Z","lastTransitionTime":"2026-01-26T16:58:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.655679 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.655705 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.655713 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.655727 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.655736 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:58:59Z","lastTransitionTime":"2026-01-26T16:58:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.757845 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.757889 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.757900 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.757917 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.757929 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:58:59Z","lastTransitionTime":"2026-01-26T16:58:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.859277 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.859302 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.859310 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.859323 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.859331 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:58:59Z","lastTransitionTime":"2026-01-26T16:58:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.947561 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" event={"ID":"ad7b59f9-beb7-49d6-a2d1-e29133e46854","Type":"ContainerStarted","Data":"fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196"} Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.960005 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63c75ede-5170-4db0-811b-5217ef8d72b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da26ddcfc6ccde3c9aabd63bdef3435f9a9eaab8c095bfeef5670c1295576cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ca3fa13d9e8d442efa93b44a870369f7df3fe7562d77b98528f5c19a751f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xm9cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-26T16:58:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.961469 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.961515 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.961545 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.961563 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.961574 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:58:59Z","lastTransitionTime":"2026-01-26T16:58:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.973439 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tp5hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f28414c-12c1-4adb-be7b-6182310828eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af4930ff9cebd62106545fb3da2dfc93f7b591426fe4c85aa2e637b60c935f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzc59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tp5hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.984173 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c09748c0963bdb50c9552f249d1135ea53e68c52babd788dad050951cb849cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c
04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:58:59 crc kubenswrapper[4856]: I0126 16:58:59.997036 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:58:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.010227 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191ad5d41024c88c1a4fbc30f307dd6340fd55b93b00d22f62209d0e82be286f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf2a4d8be409a46ffe5702797e56a80023a72a19fb7dc5c49e5f4984cedc600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:00Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.028695 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:00Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.041789 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:00Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.052414 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t4fq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://627cd6b39fdedd967fffcfc0755439277b2d73016fba1ddc7f5b9d5deba43b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p5swq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t4fq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:00Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.097867 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq622" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a742e7b-c420-46e3-9e96-e9c744af6124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7222c9b91c0065a545bc1904d9864a5923cc13bfb6617daeb4a965a830f191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8plh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq622\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:00Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.114748 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.114791 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.114801 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.114816 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.114826 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:00Z","lastTransitionTime":"2026-01-26T16:59:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.122279 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad7b59f9-beb7-49d6-a2d1-e29133e46854\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v2l7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:00Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.140780 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:00Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.156672 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:00Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.217658 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.217684 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.217692 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:00 crc 
kubenswrapper[4856]: I0126 16:59:00.217705 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.217714 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:00Z","lastTransitionTime":"2026-01-26T16:59:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.320225 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.320263 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.320274 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.320290 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.320305 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:00Z","lastTransitionTime":"2026-01-26T16:59:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.394749 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.394811 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.394767 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:59:00 crc kubenswrapper[4856]: E0126 16:59:00.394894 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:59:00 crc kubenswrapper[4856]: E0126 16:59:00.395004 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:59:00 crc kubenswrapper[4856]: E0126 16:59:00.395078 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.423254 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.423292 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.423301 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.423318 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.423334 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:00Z","lastTransitionTime":"2026-01-26T16:59:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.460841 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 12:53:02.635599378 +0000 UTC Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.525868 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.525904 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.525912 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.525926 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.525938 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:00Z","lastTransitionTime":"2026-01-26T16:59:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.628710 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.629042 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.629056 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.629075 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.629087 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:00Z","lastTransitionTime":"2026-01-26T16:59:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.731918 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.731988 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.732001 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.732019 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.732031 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:00Z","lastTransitionTime":"2026-01-26T16:59:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.834459 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.834518 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.834556 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.834583 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.834601 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:00Z","lastTransitionTime":"2026-01-26T16:59:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.937271 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.937312 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.937324 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.937346 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.937359 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:00Z","lastTransitionTime":"2026-01-26T16:59:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.951553 4856 generic.go:334] "Generic (PLEG): container finished" podID="ad7b59f9-beb7-49d6-a2d1-e29133e46854" containerID="fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196" exitCode=0 Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.951610 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" event={"ID":"ad7b59f9-beb7-49d6-a2d1-e29133e46854","Type":"ContainerDied","Data":"fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196"} Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.964575 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63c75ede-5170-4db0-811b-5217ef8d72b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da26ddcfc6ccde3c9aabd63bdef3435f9a9eaab8c095bfeef5670c1295576cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef
318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ca3fa13d9e8d442efa93b44a870369f7df3fe7562d77b98528f5c19a751f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xm9cq\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:00Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.984675 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tp5hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f28414c-12c1-4adb-be7b-6182310828eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af4930ff9cebd62106545fb3da2dfc93f7b591426fe4c85aa2e637b60c935f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\
\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzc59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tp5hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:00Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:00 crc kubenswrapper[4856]: I0126 16:59:00.996934 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c09748c0963bdb50c9552f249d1135ea53e68c52babd788dad050951cb849cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:00Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.011635 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:01Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.023505 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t4fq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://627cd6b39fdedd967fffcfc0755439277b2d73016fba1ddc7f5b9d5deba43b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p5swq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t4fq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:01Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.036820 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq622" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a742e7b-c420-46e3-9e96-e9c744af6124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7222c9b91c0065a545bc1904d9864a5923cc13bfb6617daeb4a965a830f191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8plh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq622\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:01Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.041848 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.041878 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.041886 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.041901 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.041911 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:01Z","lastTransitionTime":"2026-01-26T16:59:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.051369 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191ad5d41024c88c1a4fbc30f307dd6340fd55b93b00d22f62209d0e82be286f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf2a4d8be409a46ffe5702797e56a80023a72a19fb7dc5c49e5f4984cedc600\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:01Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.064663 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:01Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.080151 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:01Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.097142 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:01Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.111801 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad7b59f9-beb7-49d6-a2d1-e29133e46854\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v2l7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-26T16:59:01Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.131191 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:01Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.144623 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.144660 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.144672 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.144689 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.144701 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:01Z","lastTransitionTime":"2026-01-26T16:59:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.246664 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.246702 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.246711 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.246728 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.246740 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:01Z","lastTransitionTime":"2026-01-26T16:59:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.349610 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.349653 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.349665 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.349695 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.349723 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:01Z","lastTransitionTime":"2026-01-26T16:59:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.451780 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.451831 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.451852 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.451873 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.451896 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:01Z","lastTransitionTime":"2026-01-26T16:59:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.461158 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 06:39:35.931342519 +0000 UTC Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.554456 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.554703 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.554712 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.554726 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.554734 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:01Z","lastTransitionTime":"2026-01-26T16:59:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.658989 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.659015 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.659026 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.659042 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.659052 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:01Z","lastTransitionTime":"2026-01-26T16:59:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.762580 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.762638 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.762653 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.762676 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.762692 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:01Z","lastTransitionTime":"2026-01-26T16:59:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.865089 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.865133 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.865144 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.865162 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.865172 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:01Z","lastTransitionTime":"2026-01-26T16:59:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.958396 4856 generic.go:334] "Generic (PLEG): container finished" podID="ad7b59f9-beb7-49d6-a2d1-e29133e46854" containerID="79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993" exitCode=0 Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.958455 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" event={"ID":"ad7b59f9-beb7-49d6-a2d1-e29133e46854","Type":"ContainerDied","Data":"79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993"} Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.969841 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.969880 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.969891 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.969909 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.969918 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:01Z","lastTransitionTime":"2026-01-26T16:59:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.971038 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63c75ede-5170-4db0-811b-5217ef8d72b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da26ddcfc6ccde3c9aabd63bdef3435f9a9eaab8c095bfeef5670c1295576cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ca3fa13d9e8d442efa93b44a870369f7df3fe7562d77b98528f5c19a751f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xm9cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:01Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.971975 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" 
event={"ID":"ab5b6f50-172b-4535-a0f9-5d103bcab4e7","Type":"ContainerStarted","Data":"11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a"} Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.983451 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"c823748751e9938f10fab08e33b5fffff5a6d15961ce028204131b7b69a56c18"} Jan 26 16:59:01 crc kubenswrapper[4856]: I0126 16:59:01.987055 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c09748c0963bdb50c9552f249d1135ea53e68c52babd788dad050951cb849cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:01Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.006971 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:02Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.022167 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tp5hk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f28414c-12c1-4adb-be7b-6182310828eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af4930ff9cebd62106545fb3da2dfc93f7b591426fe4c85aa2e637b60c935f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzc59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tp5hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:02Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.035936 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191ad5d41024c88c1a4fbc30f307dd6340fd55b93b00d22f62209d0e82be286f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26
T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf2a4d8be409a46ffe5702797e56a80023a72a19fb7dc5c49e5f4984cedc600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:02Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.049148 4856 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:02Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.062314 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:02Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.072639 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 
16:59:02.072668 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.072679 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.072693 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.072703 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:02Z","lastTransitionTime":"2026-01-26T16:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.073881 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t4fq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://627cd6b39fdedd967fffcfc0755439277b2d73016fba1ddc7f5b9d5deba43b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p5swq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t4fq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:02Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.090197 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq622" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a742e7b-c420-46e3-9e96-e9c744af6124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7222c9b91c0065a545bc1904d9864a5923cc13bfb6617daeb4a965a830f191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8plh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq622\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:02Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.110726 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:02Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.124102 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:02Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.142777 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad7b59f9-beb7-49d6-a2d1-e29133e46854\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v2l7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:02Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 
16:59:02.159444 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad7b59f9-beb7-49d6-a2d1-e29133e46854\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v2l7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:02Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 
16:59:02.178792 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:02Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.185337 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.185373 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.185384 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.185399 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.185409 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:02Z","lastTransitionTime":"2026-01-26T16:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.193465 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:02Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.208065 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63c75ede-5170-4db0-811b-5217ef8d72b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da26ddcfc6ccde3c9aabd63bdef3435f9a9eaab8c095bfeef5670c1295576cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ca3fa13d9e8d442efa93b44a870369f7df3fe7
562d77b98528f5c19a751f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xm9cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:02Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.220353 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tp5hk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f28414c-12c1-4adb-be7b-6182310828eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af4930ff9cebd62106545fb3da2dfc93f7b591426fe4c85aa2e637b60c935f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzc59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tp5hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:02Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.234964 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c09748c0963bdb50c9552f249d1135ea53e68c52babd788dad050951cb849cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:02Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.250263 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:02Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.270693 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191ad5d41024c88c1a4fbc30f307dd6340fd55b93b00d22f62209d0e82be286f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf2a4d8be409a46ffe5702797e56a80023a72a19fb7dc5c49e5f4984cedc600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:02Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.284494 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c823748751e9938f10fab08e33b5fffff5a6d15961ce028204131b7b69a56c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T16:59:02Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.288036 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.288072 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.288081 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.288096 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.288106 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:02Z","lastTransitionTime":"2026-01-26T16:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.297279 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:02Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.307478 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t4fq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://627cd6b39fdedd967fffcfc0755439277b2d73016fba1ddc7f5b9d5deba43b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p5swq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t4fq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:02Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.319603 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq622" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a742e7b-c420-46e3-9e96-e9c744af6124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7222c9b91c0065a545bc1904d9864a5923cc13bfb6617daeb4a965a830f191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8plh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq622\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:02Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.390171 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.390225 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.390236 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.390255 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.390274 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:02Z","lastTransitionTime":"2026-01-26T16:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.394373 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.394399 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.394402 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:59:02 crc kubenswrapper[4856]: E0126 16:59:02.394474 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:59:02 crc kubenswrapper[4856]: E0126 16:59:02.394567 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:59:02 crc kubenswrapper[4856]: E0126 16:59:02.394618 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.461591 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 04:40:00.515652777 +0000 UTC Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.479269 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.479503 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:59:02 crc kubenswrapper[4856]: E0126 16:59:02.479554 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:59:10.479496688 +0000 UTC m=+46.432750669 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:59:02 crc kubenswrapper[4856]: E0126 16:59:02.479668 4856 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 16:59:02 crc kubenswrapper[4856]: E0126 16:59:02.479876 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 16:59:02 crc kubenswrapper[4856]: E0126 16:59:02.479904 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 16:59:02 crc kubenswrapper[4856]: E0126 16:59:02.479924 4856 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:59:02 crc kubenswrapper[4856]: E0126 16:59:02.479883 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 16:59:10.479869739 +0000 UTC m=+46.433123950 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 16:59:02 crc kubenswrapper[4856]: E0126 16:59:02.480015 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 16:59:10.479992532 +0000 UTC m=+46.433246513 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.479746 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.480071 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:59:02 crc 
kubenswrapper[4856]: I0126 16:59:02.480102 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:59:02 crc kubenswrapper[4856]: E0126 16:59:02.480244 4856 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 16:59:02 crc kubenswrapper[4856]: E0126 16:59:02.480280 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 16:59:10.48027272 +0000 UTC m=+46.433526701 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 16:59:02 crc kubenswrapper[4856]: E0126 16:59:02.480405 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 16:59:02 crc kubenswrapper[4856]: E0126 16:59:02.480467 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 16:59:02 crc kubenswrapper[4856]: E0126 16:59:02.480485 4856 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:59:02 crc kubenswrapper[4856]: E0126 16:59:02.480605 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 16:59:10.480570738 +0000 UTC m=+46.433824719 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.492557 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.492622 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.492647 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.492677 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.492702 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:02Z","lastTransitionTime":"2026-01-26T16:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.595657 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.595711 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.595723 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.595742 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.595756 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:02Z","lastTransitionTime":"2026-01-26T16:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.698708 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.698744 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.698754 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.698768 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.698777 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:02Z","lastTransitionTime":"2026-01-26T16:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.801263 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.801315 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.801325 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.801341 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.801350 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:02Z","lastTransitionTime":"2026-01-26T16:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.904491 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.904551 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.904564 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.904583 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.904595 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:02Z","lastTransitionTime":"2026-01-26T16:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.989089 4856 generic.go:334] "Generic (PLEG): container finished" podID="ad7b59f9-beb7-49d6-a2d1-e29133e46854" containerID="c0c0b092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1" exitCode=0 Jan 26 16:59:02 crc kubenswrapper[4856]: I0126 16:59:02.989159 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" event={"ID":"ad7b59f9-beb7-49d6-a2d1-e29133e46854","Type":"ContainerDied","Data":"c0c0b092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1"} Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.005069 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:03Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.007267 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.007309 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.007320 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:03 crc 
kubenswrapper[4856]: I0126 16:59:03.007338 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.007350 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:03Z","lastTransitionTime":"2026-01-26T16:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.022839 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad7b59f9-beb7-49d6-a2d1-e29133e46854\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce6
3c54e5bb06a0da69e483461d993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0c0b092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0c0b092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
1-26T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v2l7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:03Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.043467 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-con
troller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:03Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.056794 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63c75ede-5170-4db0-811b-5217ef8d72b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da26ddcfc6ccde3c9aabd63bdef3435f9a9eaab8c095bfeef5670c1295576cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ca3fa13d9e8d442efa93b44a870369f7df3fe7
562d77b98528f5c19a751f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xm9cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:03Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.070876 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tp5hk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f28414c-12c1-4adb-be7b-6182310828eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af4930ff9cebd62106545fb3da2dfc93f7b591426fe4c85aa2e637b60c935f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzc59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tp5hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:03Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.084351 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c09748c0963bdb50c9552f249d1135ea53e68c52babd788dad050951cb849cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:03Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.099626 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:03Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.110754 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.110799 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.110856 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 
16:59:03.110875 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.110887 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:03Z","lastTransitionTime":"2026-01-26T16:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.114249 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191ad5d41024c88c1a4fbc30f307dd6340fd55b93b00d22f62209d0e82be286f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf2a4d8be409a46ffe5702797e56a80023a72a19fb7dc5c49e5f4984cedc600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:03Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.129196 4856 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c823748751e9938f10fab08e33b5fffff5a6d15961ce028204131b7b69a56c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:03Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.142121 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:03Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.154826 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t4fq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://627cd6b39fdedd967fffcfc0755439277b2d73016fba1ddc7f5b9d5deba43b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p5swq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t4fq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:03Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.170185 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq622" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a742e7b-c420-46e3-9e96-e9c744af6124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7222c9b91c0065a545bc1904d9864a5923cc13bfb6617daeb4a965a830f191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8plh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq622\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:03Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.214074 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.214111 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.214121 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.214136 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.214146 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:03Z","lastTransitionTime":"2026-01-26T16:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.316848 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.316891 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.316900 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.316917 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.316928 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:03Z","lastTransitionTime":"2026-01-26T16:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.420881 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.420932 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.420945 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.420962 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.420982 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:03Z","lastTransitionTime":"2026-01-26T16:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.461956 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 03:03:49.251194883 +0000 UTC Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.523855 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.523895 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.523905 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.523921 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.523931 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:03Z","lastTransitionTime":"2026-01-26T16:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.626874 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.626914 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.626928 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.626946 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.626959 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:03Z","lastTransitionTime":"2026-01-26T16:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.730618 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.730683 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.730699 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.730724 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.730743 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:03Z","lastTransitionTime":"2026-01-26T16:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.839284 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.839344 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.839364 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.839392 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.839410 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:03Z","lastTransitionTime":"2026-01-26T16:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.941654 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.941729 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.941755 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.941790 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.941814 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:03Z","lastTransitionTime":"2026-01-26T16:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.995417 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" event={"ID":"ad7b59f9-beb7-49d6-a2d1-e29133e46854","Type":"ContainerStarted","Data":"62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea"} Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.997866 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" event={"ID":"ab5b6f50-172b-4535-a0f9-5d103bcab4e7","Type":"ContainerStarted","Data":"36e1e2c0e79e963e1ad9b28e9b6e7d69c6b6df040359eef2630b4aeb32109f38"} Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.998621 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.998716 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:59:03 crc kubenswrapper[4856]: I0126 16:59:03.998734 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.016499 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191ad5d41024c88c1a4fbc30f307dd6340fd55b93b00d22f62209d0e82be286f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf2a4d8be409a46ffe5702797e56a80023a72a19fb7dc5c49e5f4984cedc600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:04Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.030850 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c823748751e9938f10fab08e33b5fffff5a6d15961ce028204131b7b69a56c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T16:59:04Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.046351 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:04Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.062567 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t4fq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://627cd6b39fdedd967fffcfc0755439277b2d73016fba1ddc7f5b9d5deba43b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p5swq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t4fq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:04Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.078703 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.078904 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.078920 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.078928 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.078941 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.078950 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:04Z","lastTransitionTime":"2026-01-26T16:59:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.079081 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq622" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a742e7b-c420-46e3-9e96-e9c744af6124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7222c9b91c0065a545bc1904d9864a5923cc13bfb6617daeb4a965a830f191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8plh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq622\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:04Z 
is after 2025-08-24T17:21:41Z" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.091878 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:04Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.118291 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad7b59f9-beb7-49d6-a2d1-e29133e46854\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0c0b092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0c0b092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc
84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v2l7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:04Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.132132 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.135611 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:04Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.147682 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63c75ede-5170-4db0-811b-5217ef8d72b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da26ddcfc6ccde3c9aabd63bdef3435f9a9eaab8c095bfeef5670c1295576cb0\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ca3fa13d9e8d442efa93b44a870369f7df3fe7562d77b98528f5c19a751f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-xm9cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:04Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.157823 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tp5hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f28414c-12c1-4adb-be7b-6182310828eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af4930ff9cebd62106545fb3da2dfc93f7b591426fe4c85aa2e637b60c935f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzc59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tp5hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:04Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.174776 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c09748c0963bdb50c9552f249d1135ea53e68c52babd788dad050951cb849cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:04Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.180972 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.181016 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.181027 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.181046 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.181058 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:04Z","lastTransitionTime":"2026-01-26T16:59:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.194138 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:04Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.206186 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tp5hk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f28414c-12c1-4adb-be7b-6182310828eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af4930ff9cebd62106545fb3da2dfc93f7b591426fe4c85aa2e637b60c935f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzc59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tp5hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:04Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.225676 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c09748c0963bdb50c9552f249d1135ea53e68c52babd788dad050951cb849cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:04Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.242935 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:04Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.265374 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191ad5d41024c88c1a4fbc30f307dd6340fd55b93b00d22f62209d0e82be286f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf2a4d8be409a46ffe5702797e56a80023a72a19fb7dc5c49e5f4984cedc600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:04Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.281471 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c823748751e9938f10fab08e33b5fffff5a6d15961ce028204131b7b69a56c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T16:59:04Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.283655 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.283708 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.283721 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.283742 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.283755 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:04Z","lastTransitionTime":"2026-01-26T16:59:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.295955 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:04Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.309272 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t4fq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://627cd6b39fdedd967fffcfc0755439277b2d73016fba1ddc7f5b9d5deba43b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p5swq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t4fq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:04Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.327583 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq622" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a742e7b-c420-46e3-9e96-e9c744af6124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7222c9b91c0065a545bc1904d9864a5923cc13bfb6617daeb4a965a830f191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8plh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq622\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:04Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.340955 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:04Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.355477 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad7b59f9-beb7-49d6-a2d1-e29133e46854\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0c0b092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0c0b092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc
84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v2l7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:04Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.375865 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e1e2c0e79e963e1ad9b28e9b6e7d69c6b6df040359eef2630b4aeb32109f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:04Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.386125 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.386172 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.386183 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.386198 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.386208 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:04Z","lastTransitionTime":"2026-01-26T16:59:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.387779 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63c75ede-5170-4db0-811b-5217ef8d72b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da26ddcfc6ccde3c9aabd63bdef3435f9a9eaab8c095bfeef5670c1295576cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ca3fa13d9e8d442efa93b44a870369f7df3fe7562d77b98528f5c19a751f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xm9cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:04Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.395025 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.395105 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:59:04 crc kubenswrapper[4856]: E0126 16:59:04.395159 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.395286 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:59:04 crc kubenswrapper[4856]: E0126 16:59:04.395438 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:59:04 crc kubenswrapper[4856]: E0126 16:59:04.395596 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.462608 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 15:18:50.096384286 +0000 UTC Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.488820 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.488864 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.488874 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.488891 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.488901 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:04Z","lastTransitionTime":"2026-01-26T16:59:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.592541 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.592589 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.592604 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.592623 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.592638 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:04Z","lastTransitionTime":"2026-01-26T16:59:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.700053 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.700090 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.700099 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.700114 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.700124 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:04Z","lastTransitionTime":"2026-01-26T16:59:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.802110 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.802138 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.802145 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.802157 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.802166 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:04Z","lastTransitionTime":"2026-01-26T16:59:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.904101 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.904157 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.904167 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.904184 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:04 crc kubenswrapper[4856]: I0126 16:59:04.904194 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:04Z","lastTransitionTime":"2026-01-26T16:59:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.028159 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.028192 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.028201 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.028215 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.028225 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:05Z","lastTransitionTime":"2026-01-26T16:59:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.132276 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.132317 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.132326 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.132343 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.132354 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:05Z","lastTransitionTime":"2026-01-26T16:59:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.235501 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.235571 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.235582 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.235599 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.235611 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:05Z","lastTransitionTime":"2026-01-26T16:59:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.337938 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.338264 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.338277 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.338290 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.338299 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:05Z","lastTransitionTime":"2026-01-26T16:59:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.440785 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.440822 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.440834 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.440856 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.440868 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:05Z","lastTransitionTime":"2026-01-26T16:59:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.462972 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 09:50:47.731554375 +0000 UTC Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.464939 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63c75ede-5170-4db0-811b-5217ef8d72b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da26ddcfc6ccde3c9aabd63bdef3435f9a9eaab8c095bfeef5670c1295576cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ca3fa13d9e8d442efa93b44a870369f7df3fe7562d77b98528f5c19a751f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xm9cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:05Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 
16:59:05.478660 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tp5hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f28414c-12c1-4adb-be7b-6182310828eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af4930ff9cebd62106545fb3da2dfc93f7b591426fe4c85aa2e637b60c935f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzc59\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tp5hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:05Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.493667 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c09748c0963bdb50c9552f249d1135ea53e68c52babd788dad050951cb849cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:05Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.508786 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:05Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.522418 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq622" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a742e7b-c420-46e3-9e96-e9c744af6124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7222c9b91c0065a545bc1904d9864a5923cc13bfb6617daeb4a965a830f191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8plh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq622\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:05Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.537485 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191ad5d41024c88c1a4fbc30f307dd6340fd55b93b00d22f62209d0e82be286f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf2a4d8be409a46ffe5702797e56a80023a72a19fb7dc5c49e5f4984cedc600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:05Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.543208 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.543253 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.543267 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.543287 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.543300 4856 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:05Z","lastTransitionTime":"2026-01-26T16:59:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.550392 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c823748751e9938f10fab08e33b5fffff5a6d15961ce028204131b7b69a56c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:05Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.567196 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:05Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.578581 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t4fq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://627cd6b39fdedd967fffcfc0755439277b2d73016fba1ddc7f5b9d5deba43b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p5swq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t4fq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:05Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.595235 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:05Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.613276 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad7b59f9-beb7-49d6-a2d1-e29133e46854\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0c0b092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0c0b092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc
84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v2l7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:05Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.633400 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e1e2c0e79e963e1ad9b28e9b6e7d69c6b6df040359eef2630b4aeb32109f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:05Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.645749 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.646004 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.646068 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.646131 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.646209 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:05Z","lastTransitionTime":"2026-01-26T16:59:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.748876 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.748912 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.748923 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.748937 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.748946 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:05Z","lastTransitionTime":"2026-01-26T16:59:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.851541 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.851600 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.851610 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.851662 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.851672 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:05Z","lastTransitionTime":"2026-01-26T16:59:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.954297 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.954379 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.954411 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.954441 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:05 crc kubenswrapper[4856]: I0126 16:59:05.954462 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:05Z","lastTransitionTime":"2026-01-26T16:59:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.056667 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.056706 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.056714 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.056732 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.056742 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:06Z","lastTransitionTime":"2026-01-26T16:59:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.158971 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.159012 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.159044 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.159062 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.159071 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:06Z","lastTransitionTime":"2026-01-26T16:59:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.262007 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.262085 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.262106 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.262129 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.262142 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:06Z","lastTransitionTime":"2026-01-26T16:59:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.365145 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.365216 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.365239 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.365272 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.365297 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:06Z","lastTransitionTime":"2026-01-26T16:59:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.394503 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.394955 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.394974 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:59:06 crc kubenswrapper[4856]: E0126 16:59:06.395106 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:59:06 crc kubenswrapper[4856]: E0126 16:59:06.395279 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:59:06 crc kubenswrapper[4856]: E0126 16:59:06.395551 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.463648 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 17:17:55.081671222 +0000 UTC Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.467992 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.468071 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.468118 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.468142 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.468157 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:06Z","lastTransitionTime":"2026-01-26T16:59:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.570860 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.570922 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.570935 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.570953 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.570965 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:06Z","lastTransitionTime":"2026-01-26T16:59:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.673988 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.674042 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.674054 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.674071 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.674081 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:06Z","lastTransitionTime":"2026-01-26T16:59:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.776709 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.776761 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.776772 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.776795 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.776808 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:06Z","lastTransitionTime":"2026-01-26T16:59:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.879264 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.879323 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.879344 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.879366 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.879380 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:06Z","lastTransitionTime":"2026-01-26T16:59:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.982588 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.982623 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.982634 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.982651 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:06 crc kubenswrapper[4856]: I0126 16:59:06.982663 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:06Z","lastTransitionTime":"2026-01-26T16:59:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.008266 4856 generic.go:334] "Generic (PLEG): container finished" podID="ad7b59f9-beb7-49d6-a2d1-e29133e46854" containerID="62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea" exitCode=0 Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.008311 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" event={"ID":"ad7b59f9-beb7-49d6-a2d1-e29133e46854","Type":"ContainerDied","Data":"62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea"} Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.026006 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:07Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.042735 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad7b59f9-beb7-49d6-a2d1-e29133e46854\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0c0b092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0c0b092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v2l7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:07Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.066661 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e1e2c0e79e963e1ad9b28e9b6e7d69c6b6df040359eef2630b4aeb32109f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:07Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.080421 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63c75ede-5170-4db0-811b-5217ef8d72b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da26ddcfc6ccde3c9aabd63bdef3435f9a9eaab8c095bfeef5670c1295576cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ca3fa13d9e8d442efa93b44a870369f7df3fe7
562d77b98528f5c19a751f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xm9cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:07Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.084888 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.084919 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.084928 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:07 crc 
kubenswrapper[4856]: I0126 16:59:07.084941 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.084954 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:07Z","lastTransitionTime":"2026-01-26T16:59:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.090868 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tp5hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f28414c-12c1-4adb-be7b-6182310828eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af4930ff9cebd62106545fb3da2dfc93f7b591426fe4c85aa2e637b60c935f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1
e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzc59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tp5hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:07Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.106200 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c09748c0963bdb50c9552f249d1135ea53e68c52babd788dad050951cb849cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:07Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.119340 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:07Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.131344 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t4fq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://627cd6b39fdedd967fffcfc0755439277b2d73016fba1ddc7f5b9d5deba43b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p5swq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t4fq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:07Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.144351 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq622" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a742e7b-c420-46e3-9e96-e9c744af6124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7222c9b91c0065a545bc1904d9864a5923cc13bfb6617daeb4a965a830f191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8plh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq622\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:07Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.158581 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191ad5d41024c88c1a4fbc30f307dd6340fd55b93b00d22f62209d0e82be286f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name
\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf2a4d8be409a46ffe5702797e56a80023a72a19fb7dc5c49e5f4984cedc600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:07Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.171601 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c823748751e9938f10fab08e33b5fffff5a6d15961ce028204131b7b69a56c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T16:59:07Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.186590 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.186619 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.186628 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.186642 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.186651 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:07Z","lastTransitionTime":"2026-01-26T16:59:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.202292 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:07Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.289159 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.289197 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.289206 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.289223 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.289234 4856 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:07Z","lastTransitionTime":"2026-01-26T16:59:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.393128 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.393176 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.393188 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.393206 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.393216 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:07Z","lastTransitionTime":"2026-01-26T16:59:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.464889 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 13:41:09.996253405 +0000 UTC Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.497395 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.497468 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.497486 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.497512 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.497571 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:07Z","lastTransitionTime":"2026-01-26T16:59:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.599920 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.599975 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.599987 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.600005 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.600015 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:07Z","lastTransitionTime":"2026-01-26T16:59:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.702635 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.702680 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.702688 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.702702 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.702715 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:07Z","lastTransitionTime":"2026-01-26T16:59:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.805093 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.805389 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.805482 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.805616 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.805821 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:07Z","lastTransitionTime":"2026-01-26T16:59:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.908671 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.909105 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.909194 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.909284 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:07 crc kubenswrapper[4856]: I0126 16:59:07.909375 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:07Z","lastTransitionTime":"2026-01-26T16:59:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.016897 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.016927 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.016936 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.016952 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.016963 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:08Z","lastTransitionTime":"2026-01-26T16:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.075075 4856 generic.go:334] "Generic (PLEG): container finished" podID="ad7b59f9-beb7-49d6-a2d1-e29133e46854" containerID="249b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8" exitCode=0 Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.075308 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" event={"ID":"ad7b59f9-beb7-49d6-a2d1-e29133e46854","Type":"ContainerDied","Data":"249b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8"} Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.100277 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e1e2c0e79e963e1ad9b28e9b6e7d69c6b6df040359eef2630b4aeb32109f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:08Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.114943 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:08Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.127753 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.127812 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.127834 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.127858 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.127874 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:08Z","lastTransitionTime":"2026-01-26T16:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.133757 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad7b59f9-beb7-49d6-a2d1-e29133e46854\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0c0b092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0c0b092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://249b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://249b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v2l7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:08Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.144768 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63c75ede-5170-4db0-811b-5217ef8d72b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da26ddcfc6ccde3c9aabd63bdef3435f9a9eaab8c095bfeef5670c1295576cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ca3fa13d9e8d442efa93b44a870369f7df3fe7
562d77b98528f5c19a751f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xm9cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:08Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.160160 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c09748c0963bdb50c9552f249d1135ea53e68c52babd788dad050951cb849cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:08Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.175853 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:08Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.188722 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tp5hk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f28414c-12c1-4adb-be7b-6182310828eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af4930ff9cebd62106545fb3da2dfc93f7b591426fe4c85aa2e637b60c935f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzc59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tp5hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:08Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.203194 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191ad5d41024c88c1a4fbc30f307dd6340fd55b93b00d22f62209d0e82be286f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26
T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf2a4d8be409a46ffe5702797e56a80023a72a19fb7dc5c49e5f4984cedc600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:08Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.216091 4856 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c823748751e9938f10fab08e33b5fffff5a6d15961ce028204131b7b69a56c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:08Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.228396 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:08Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.231215 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.231257 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.231266 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.231283 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.231292 4856 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:08Z","lastTransitionTime":"2026-01-26T16:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.241516 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t4fq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://627cd6b39fdedd967fffcfc0755439277b2d73016fba1ddc7f5b9d5deba43b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p5swq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t4fq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:08Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.259614 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq622" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a742e7b-c420-46e3-9e96-e9c744af6124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7222c9b91c0065a545bc1904d9864a5923cc13bfb6617daeb4a965a830f191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8plh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq622\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:08Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.333258 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:08 crc 
kubenswrapper[4856]: I0126 16:59:08.333303 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.333324 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.333341 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.333364 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:08Z","lastTransitionTime":"2026-01-26T16:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.342965 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.343005 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.343015 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.343030 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.343040 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:08Z","lastTransitionTime":"2026-01-26T16:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:08 crc kubenswrapper[4856]: E0126 16:59:08.355151 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17523591-a778-4a97-aeab-8a7a93101850\\\",\\\"systemUUID\\\":\\\"ca45d056-99cb-4442-8a44-7e899628ecb2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:08Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.359266 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.359329 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.359351 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.359374 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.359396 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:08Z","lastTransitionTime":"2026-01-26T16:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:08 crc kubenswrapper[4856]: E0126 16:59:08.372254 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17523591-a778-4a97-aeab-8a7a93101850\\\",\\\"systemUUID\\\":\\\"ca45d056-99cb-4442-8a44-7e899628ecb2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:08Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.376502 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.376570 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.376588 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.376611 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.376628 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:08Z","lastTransitionTime":"2026-01-26T16:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:08 crc kubenswrapper[4856]: E0126 16:59:08.390938 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17523591-a778-4a97-aeab-8a7a93101850\\\",\\\"systemUUID\\\":\\\"ca45d056-99cb-4442-8a44-7e899628ecb2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:08Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.394539 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.394588 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.394557 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:59:08 crc kubenswrapper[4856]: E0126 16:59:08.394869 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:59:08 crc kubenswrapper[4856]: E0126 16:59:08.395006 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:59:08 crc kubenswrapper[4856]: E0126 16:59:08.395123 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.397632 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.397680 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.397692 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.397709 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.397721 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:08Z","lastTransitionTime":"2026-01-26T16:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:08 crc kubenswrapper[4856]: E0126 16:59:08.430325 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17523591-a778-4a97-aeab-8a7a93101850\\\",\\\"systemUUID\\\":\\\"ca45d056-99cb-4442-8a44-7e899628ecb2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:08Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.431099 4856 scope.go:117] "RemoveContainer" containerID="3f07438e20bdf71c752bb661084c835341999c561bb5442d75e177223881276f" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.432478 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.440133 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.440165 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.440173 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.440189 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.440199 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:08Z","lastTransitionTime":"2026-01-26T16:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:08 crc kubenswrapper[4856]: E0126 16:59:08.456654 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17523591-a778-4a97-aeab-8a7a93101850\\\",\\\"systemUUID\\\":\\\"ca45d056-99cb-4442-8a44-7e899628ecb2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:08Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:08 crc kubenswrapper[4856]: E0126 16:59:08.456814 4856 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.459663 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.459703 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.459713 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.459740 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.459755 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:08Z","lastTransitionTime":"2026-01-26T16:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.465362 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 03:50:12.578301114 +0000 UTC Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.571866 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.571930 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.571939 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.571954 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.571966 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:08Z","lastTransitionTime":"2026-01-26T16:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.674408 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.674455 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.674469 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.674486 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.674497 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:08Z","lastTransitionTime":"2026-01-26T16:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.776939 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.776992 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.777005 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.777027 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.777041 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:08Z","lastTransitionTime":"2026-01-26T16:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.880085 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.880135 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.880148 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.880193 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.880207 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:08Z","lastTransitionTime":"2026-01-26T16:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.986132 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.986386 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.986398 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.986416 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:08 crc kubenswrapper[4856]: I0126 16:59:08.986428 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:08Z","lastTransitionTime":"2026-01-26T16:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.097907 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.097958 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.097970 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.097988 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.098004 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:09Z","lastTransitionTime":"2026-01-26T16:59:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.102514 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" event={"ID":"ad7b59f9-beb7-49d6-a2d1-e29133e46854","Type":"ContainerStarted","Data":"fc53a6a6d4222ee5e1ac29fd5957d8cdf3fd42de72c68c85329374ce7afc4004"} Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.105451 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.108188 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b03a01b651d9f66da4dd1f6e0d29ad97c0d6ae46b644c3d997d8dc99476706df"} Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.108477 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.118290 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:09Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.133837 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad7b59f9-beb7-49d6-a2d1-e29133e46854\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc53a6a6d4222ee5e1ac29fd5957d8cdf3fd42de72c68c85329374ce7afc4004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:59Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0c0b
092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0c0b092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:06Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://249b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://249b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v2l7v\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:09Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.152853 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e1e2c0e79e963e1ad9b28e9b6e7d69c6b6df040359eef2630b4aeb32109f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:09Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.165989 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63c75ede-5170-4db0-811b-5217ef8d72b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da26ddcfc6ccde3c9aabd63bdef3435f9a9eaab8c095bfeef5670c1295576cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ca3fa13d9e8d442efa93b44a870369f7df3fe7562d77b98528f5c19a751f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xm9cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:09Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.177584 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tp5hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f28414c-12c1-4adb-be7b-6182310828eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af4930ff9cebd62106545fb3da2dfc93f7b591426fe4c85aa2e637b60c935f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/service
ca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzc59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tp5hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:09Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.191839 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c09748c0963bdb50c9552f249d1135ea53e68c52babd788dad050951cb849cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:09Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.198202 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-v7579"] Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.198830 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-v7579" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.199947 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.199983 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.199993 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.200008 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.200018 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:09Z","lastTransitionTime":"2026-01-26T16:59:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.200680 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.201311 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.205461 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:09Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.224818 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59ecd87a-c5db-446d-ad3e-cfabbd648c1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89975b8f9428f81ab5d3fb48ced5dd9c837bea2feea3b89f5f7ff8d7d5d15b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40d432159a07ba20bf95f058bf8a597d67cbd8d852519bed035694f8ba3d8ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://59ba0f131eee14df389391e46bc772894e49e976c3663df5fb3bc98b3cb4a3d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f07438e20bdf71c752bb661084c835341999c561bb5442d75e177223881276f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f07438e20bdf71c752bb661084c835341999c561bb5442d75e177223881276f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T16:58:51Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 16:58:43.431198 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 16:58:43.432227 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-99450334/tls.crt::/tmp/serving-cert-99450334/tls.key\\\\\\\"\\\\nI0126 16:58:50.828320 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 16:58:50.830449 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 16:58:50.830472 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 16:58:50.830509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 16:58:50.830515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 16:58:50.834885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 16:58:50.834958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 16:58:50.834991 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 16:58:50.834905 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 16:58:50.835015 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 16:58:50.835084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 16:58:50.835106 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 16:58:50.835116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 16:58:50.837641 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2daa3f13d37b7ae19dfca406b6b5cfdcd4f211287f7d410193f2fee36a24553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffc4b80383322e2c628a33ac37f15fd77c7650ca108c022f97bb48aad023462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ffc4b80383322e2c628a33ac37f15fd77c7650ca108c022f97bb48aad023462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:09Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.238420 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191ad5d41024c88c1a4fbc30f307dd6340fd55b93b00d22f62209d0e82be286f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf2a4d8be409a46ffe5702797e56a80023a72a19fb7dc5c49e5f4984cedc600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:09Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.249932 4856 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c823748751e9938f10fab08e33b5fffff5a6d15961ce028204131b7b69a56c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:09Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.261727 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:09Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.271442 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t4fq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://627cd6b39fdedd967fffcfc0755439277b2d73016fba1ddc7f5b9d5deba43b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p5swq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t4fq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:09Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.283814 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq622" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a742e7b-c420-46e3-9e96-e9c744af6124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7222c9b91c0065a545bc1904d9864a5923cc13bfb6617daeb4a965a830f191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8plh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq622\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:09Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.295698 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63c75ede-5170-4db0-811b-5217ef8d72b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da26ddcfc6ccde3c9aabd63bdef3435f9a9eaab8c095bfeef5670c1295576cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ca3fa13d9e8d442efa93b44a870369f7df3fe7562d77b98528f5c19a751f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xm9cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:09Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.295960 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a77e85f9-b566-4807-bb92-55963c97b93c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-v7579\" (UID: \"a77e85f9-b566-4807-bb92-55963c97b93c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-v7579" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.296009 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n9h7\" (UniqueName: \"kubernetes.io/projected/a77e85f9-b566-4807-bb92-55963c97b93c-kube-api-access-4n9h7\") pod \"ovnkube-control-plane-749d76644c-v7579\" (UID: \"a77e85f9-b566-4807-bb92-55963c97b93c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-v7579" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.296100 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a77e85f9-b566-4807-bb92-55963c97b93c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-v7579\" (UID: \"a77e85f9-b566-4807-bb92-55963c97b93c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-v7579" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.296416 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a77e85f9-b566-4807-bb92-55963c97b93c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-v7579\" (UID: \"a77e85f9-b566-4807-bb92-55963c97b93c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-v7579" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 
16:59:09.302236 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.302276 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.302285 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.302300 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.302311 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:09Z","lastTransitionTime":"2026-01-26T16:59:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.308302 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tp5hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f28414c-12c1-4adb-be7b-6182310828eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af4930ff9cebd62106545fb3da2dfc93f7b591426fe4c85aa2e637b60c935f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzc59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tp5hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:09Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.330544 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c09748c0963bdb50c9552f249d1135ea53e68c52babd788dad050951cb849cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c
04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:09Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.344192 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:09Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.356243 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-v7579" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a77e85f9-b566-4807-bb92-55963c97b93c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n9h7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n9h7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-v7579\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:09Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.382239 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq622" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a742e7b-c420-46e3-9e96-e9c744af6124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7222c9b91c0065a545bc1904d9864a5923cc13bfb6617daeb4a965a830f191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16
:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8plh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq622\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:09Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.396992 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a77e85f9-b566-4807-bb92-55963c97b93c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-v7579\" (UID: \"a77e85f9-b566-4807-bb92-55963c97b93c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-v7579" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.397040 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a77e85f9-b566-4807-bb92-55963c97b93c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-v7579\" (UID: \"a77e85f9-b566-4807-bb92-55963c97b93c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-v7579" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.397061 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n9h7\" (UniqueName: \"kubernetes.io/projected/a77e85f9-b566-4807-bb92-55963c97b93c-kube-api-access-4n9h7\") pod \"ovnkube-control-plane-749d76644c-v7579\" (UID: \"a77e85f9-b566-4807-bb92-55963c97b93c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-v7579" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.397100 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a77e85f9-b566-4807-bb92-55963c97b93c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-v7579\" (UID: \"a77e85f9-b566-4807-bb92-55963c97b93c\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-v7579" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.397703 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a77e85f9-b566-4807-bb92-55963c97b93c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-v7579\" (UID: \"a77e85f9-b566-4807-bb92-55963c97b93c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-v7579" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.397868 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a77e85f9-b566-4807-bb92-55963c97b93c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-v7579\" (UID: \"a77e85f9-b566-4807-bb92-55963c97b93c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-v7579" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.402227 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59ecd87a-c5db-446d-ad3e-cfabbd648c1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89975b8f9428f81ab5d3fb48ced5dd9c837bea2feea3b89f5f7ff8d7d5d15b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40d432159a07ba20bf95f058bf8a597d67cbd8d852519bed035694f8ba3d8ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://59ba0f131eee14df389391e46bc772894e49e976c3663df5fb3bc98b3cb4a3d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03a01b651d9f66da4dd1f6e0d29ad97c0d6ae46b644c3d997d8dc99476706df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f07438e20bdf71c752bb661084c835341999c561bb5442d75e177223881276f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T16:58:51Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 16:58:43.431198 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 16:58:43.432227 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-99450334/tls.crt::/tmp/serving-cert-99450334/tls.key\\\\\\\"\\\\nI0126 16:58:50.828320 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 16:58:50.830449 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 16:58:50.830472 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 16:58:50.830509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 16:58:50.830515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 16:58:50.834885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 16:58:50.834958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 16:58:50.834991 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 16:58:50.834905 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 16:58:50.835015 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 16:58:50.835084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 16:58:50.835106 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 16:58:50.835116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 16:58:50.837641 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2daa3f13d37b7ae19dfca406b6b5cfdcd4f211287f7d410193f2fee36a24553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffc4b80383322e2c628a33ac37f15fd77c7650ca108c022f97bb48aad023462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ffc4b80383322e2c628a33ac37f15fd77c7650ca108c022f97bb48aad023462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T16:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:09Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.405116 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.405178 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.405201 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.405231 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.405253 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:09Z","lastTransitionTime":"2026-01-26T16:59:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.409098 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a77e85f9-b566-4807-bb92-55963c97b93c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-v7579\" (UID: \"a77e85f9-b566-4807-bb92-55963c97b93c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-v7579" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.428348 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n9h7\" (UniqueName: \"kubernetes.io/projected/a77e85f9-b566-4807-bb92-55963c97b93c-kube-api-access-4n9h7\") pod \"ovnkube-control-plane-749d76644c-v7579\" (UID: \"a77e85f9-b566-4807-bb92-55963c97b93c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-v7579" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.431191 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191ad5d41024c88c1a4fbc30f307dd6340fd55b93b00d22f62209d0e82be286f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf2a4d8be409a46ffe5702797e56a80023a72a19fb7dc5c49e5f4984cedc600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:09Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.444258 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c823748751e9938f10fab08e33b5fffff5a6d15961ce028204131b7b69a56c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T16:59:09Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.456287 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:09Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.465789 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 17:27:52.512562162 +0000 UTC Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.467986 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t4fq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://627cd6b39fdedd967fffcfc0755439277b2d73016fba1ddc7f5b9d5deba43b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p5swq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t4fq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:09Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.481651 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:09Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.497995 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad7b59f9-beb7-49d6-a2d1-e29133e46854\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc53a6a6d4222ee5e1ac29fd5957d8cdf3fd42de72c68c85329374ce7afc4004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:59Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0c0b
092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0c0b092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:06Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://249b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://249b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v2l7v\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:09Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.509277 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.509323 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.509338 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.509358 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.509371 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:09Z","lastTransitionTime":"2026-01-26T16:59:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.509696 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-v7579" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.521805 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e1e2c0e79e963e1ad9b28e9b6e7d69c6b6df040359eef2630b4aeb32109f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:09Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.612059 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.612099 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.612107 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.612123 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.612133 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:09Z","lastTransitionTime":"2026-01-26T16:59:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.715045 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.715081 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.715091 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.715109 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.715120 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:09Z","lastTransitionTime":"2026-01-26T16:59:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.817905 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.817944 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.817952 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.817968 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.817978 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:09Z","lastTransitionTime":"2026-01-26T16:59:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.920774 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.920813 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.920822 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.920856 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:09 crc kubenswrapper[4856]: I0126 16:59:09.920865 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:09Z","lastTransitionTime":"2026-01-26T16:59:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.023017 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.023045 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.023054 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.023068 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.023077 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:10Z","lastTransitionTime":"2026-01-26T16:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.150822 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.150867 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.150877 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.150897 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.150911 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:10Z","lastTransitionTime":"2026-01-26T16:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.155127 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-v7579" event={"ID":"a77e85f9-b566-4807-bb92-55963c97b93c","Type":"ContainerStarted","Data":"8ba87c9fc35c230bbee201a5176cb467309f0b9aee82dfc81f3b677a15486d02"} Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.155165 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-v7579" event={"ID":"a77e85f9-b566-4807-bb92-55963c97b93c","Type":"ContainerStarted","Data":"8f4c467dd37bd5f0ee2c1948583c7bc17be187850c547ffd809acbda9b7dd364"} Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.255983 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.256015 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.256023 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.256039 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.256051 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:10Z","lastTransitionTime":"2026-01-26T16:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.296897 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-295wr"] Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.297374 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 16:59:10 crc kubenswrapper[4856]: E0126 16:59:10.297444 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-295wr" podUID="12e50462-28e6-4531-ada4-e652310e6cce" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.316177 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tp5hk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f28414c-12c1-4adb-be7b-6182310828eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af4930ff9cebd62106545fb3da2dfc93f7b591426fe4c85aa2e637b60c935f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzc59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tp5hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:10Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.381456 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.381502 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.381515 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.381555 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.381571 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:10Z","lastTransitionTime":"2026-01-26T16:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.382272 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c09748c0963bdb50c9552f249d1135ea53e68c52babd788dad050951cb849cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:10Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.394788 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.394892 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.394796 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:59:10 crc kubenswrapper[4856]: E0126 16:59:10.394930 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:59:10 crc kubenswrapper[4856]: E0126 16:59:10.395058 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:59:10 crc kubenswrapper[4856]: E0126 16:59:10.395138 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.396547 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:10Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.408786 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-v7579" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a77e85f9-b566-4807-bb92-55963c97b93c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n9h7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n9h7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-v7579\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:10Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.410012 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf98h\" (UniqueName: \"kubernetes.io/projected/12e50462-28e6-4531-ada4-e652310e6cce-kube-api-access-tf98h\") pod \"network-metrics-daemon-295wr\" (UID: \"12e50462-28e6-4531-ada4-e652310e6cce\") " pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.410418 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12e50462-28e6-4531-ada4-e652310e6cce-metrics-certs\") pod \"network-metrics-daemon-295wr\" (UID: \"12e50462-28e6-4531-ada4-e652310e6cce\") " pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.424696 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59ecd87a-c5db-446d-ad3e-cfabbd648c1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89975b8f9428f81ab5d3fb48ced5dd9c837bea2feea3b89f5f7ff8d7d5d15b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40d432159a07ba20bf95f058bf8a597d67cbd8d852519bed035694f8ba3d8ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ba0f131eee14df389391e46bc772894e49e976c3663df5fb3bc98b3cb4a3d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03a01b651d9f66da4dd1f6e0d29ad97c0d6ae46b644c3d997d8dc99476706df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f07438e20bdf71c752bb661084c835341999c561bb5442d75e177223881276f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T16:58:51Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 16:58:43.431198 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 16:58:43.432227 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-99450334/tls.crt::/tmp/serving-cert-99450334/tls.key\\\\\\\"\\\\nI0126 16:58:50.828320 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 16:58:50.830449 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 16:58:50.830472 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 16:58:50.830509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 16:58:50.830515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 16:58:50.834885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 16:58:50.834958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 16:58:50.834991 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 16:58:50.834905 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 16:58:50.835015 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 
16:58:50.835084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 16:58:50.835106 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 16:58:50.835116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 16:58:50.837641 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2daa3f13d37b7ae19dfca406b6b5cfdcd4f211287f7d410193f2fee36a24553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffc4b80383322e2c628a33ac37f15fd77c7650ca108c022f97bb48aad023462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ffc4b80383322e2c628a33ac37f15fd77c7650ca108c022f97bb48aad023462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:10Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.436759 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191ad5d41024c88c1a4fbc30f307dd6340fd55b93b00d22f62209d0e82be286f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf2a4d8be409a46ffe5702797e56a80023a72a19fb7dc5c49e5f4984cedc600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:10Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.448228 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c823748751e9938f10fab08e33b5fffff5a6d15961ce028204131b7b69a56c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T16:59:10Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.466848 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:10Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.467054 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 15:12:27.601921122 +0000 UTC Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.479767 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t4fq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://627cd6b39fdedd967fffcfc0755439277b2d73016fba1ddc7f5b9d5deba43b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p5swq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t4fq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:10Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.483944 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.483989 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.484001 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.484019 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.484033 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:10Z","lastTransitionTime":"2026-01-26T16:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.493730 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq622" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a742e7b-c420-46e3-9e96-e9c744af6124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7222c9b91c0065a545bc1904d9864a5923cc13bfb6617daeb4a965a830f191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8plh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq622\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:10Z 
is after 2025-08-24T17:21:41Z" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.543262 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.543398 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tf98h\" (UniqueName: \"kubernetes.io/projected/12e50462-28e6-4531-ada4-e652310e6cce-kube-api-access-tf98h\") pod \"network-metrics-daemon-295wr\" (UID: \"12e50462-28e6-4531-ada4-e652310e6cce\") " pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 16:59:10 crc kubenswrapper[4856]: E0126 16:59:10.543427 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:59:26.543403556 +0000 UTC m=+62.496657537 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.543481 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.543568 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.543598 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.543627 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12e50462-28e6-4531-ada4-e652310e6cce-metrics-certs\") pod \"network-metrics-daemon-295wr\" (UID: \"12e50462-28e6-4531-ada4-e652310e6cce\") " pod="openshift-multus/network-metrics-daemon-295wr"
Jan 26 16:59:10 crc kubenswrapper[4856]: E0126 16:59:10.543630 4856 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 26 16:59:10 crc kubenswrapper[4856]: E0126 16:59:10.543672 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 16:59:26.543663653 +0000 UTC m=+62.496917634 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.543644 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 16:59:10 crc kubenswrapper[4856]: E0126 16:59:10.543775 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 26 16:59:10 crc kubenswrapper[4856]: E0126 16:59:10.543788 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 26 16:59:10 crc kubenswrapper[4856]: E0126 16:59:10.543798 4856 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 16:59:10 crc kubenswrapper[4856]: E0126 16:59:10.543810 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 26 16:59:10 crc kubenswrapper[4856]: E0126 16:59:10.543840 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 26 16:59:10 crc kubenswrapper[4856]: E0126 16:59:10.543847 4856 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 26 16:59:10 crc kubenswrapper[4856]: E0126 16:59:10.543858 4856 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 16:59:10 crc kubenswrapper[4856]: E0126 16:59:10.543817 4856 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 26 16:59:10 crc kubenswrapper[4856]: E0126 16:59:10.543824 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 16:59:26.543816017 +0000 UTC m=+62.497069988 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 16:59:10 crc kubenswrapper[4856]: E0126 16:59:10.543935 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 16:59:26.54392433 +0000 UTC m=+62.497178311 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 26 16:59:10 crc kubenswrapper[4856]: E0126 16:59:10.543948 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 16:59:26.543942511 +0000 UTC m=+62.497196492 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 16:59:10 crc kubenswrapper[4856]: E0126 16:59:10.543970 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12e50462-28e6-4531-ada4-e652310e6cce-metrics-certs podName:12e50462-28e6-4531-ada4-e652310e6cce nodeName:}" failed. No retries permitted until 2026-01-26 16:59:11.043962901 +0000 UTC m=+46.997216882 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/12e50462-28e6-4531-ada4-e652310e6cce-metrics-certs") pod "network-metrics-daemon-295wr" (UID: "12e50462-28e6-4531-ada4-e652310e6cce") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.587894 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.587937 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.587946 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.587964 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.587974 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:10Z","lastTransitionTime":"2026-01-26T16:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.595793 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:10Z is after 2025-08-24T17:21:41Z"
Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.605830 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf98h\" (UniqueName: \"kubernetes.io/projected/12e50462-28e6-4531-ada4-e652310e6cce-kube-api-access-tf98h\") pod \"network-metrics-daemon-295wr\" (UID: \"12e50462-28e6-4531-ada4-e652310e6cce\") " pod="openshift-multus/network-metrics-daemon-295wr"
Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.612332 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad7b59f9-beb7-49d6-a2d1-e29133e46854\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc53a6a6d4222ee5e1ac29fd5957d8cdf3fd42de72c68c85329374ce7afc4004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0c0b092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0c0b092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://249b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://249b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v2l7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:10Z is after 2025-08-24T17:21:41Z"
Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.632785 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e1e2c0e79e963e1ad9b28e9b6e7d69c6b6df040359eef2630b4aeb32109f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:10Z is after 2025-08-24T17:21:41Z"
Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.644694 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-295wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e50462-28e6-4531-ada4-e652310e6cce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf98h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf98h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:59:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-295wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:10Z is after 2025-08-24T17:21:41Z"
Jan 26 16:59:10 crc 
kubenswrapper[4856]: I0126 16:59:10.657358 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63c75ede-5170-4db0-811b-5217ef8d72b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da26ddcfc6ccde3c9aabd63bdef3435f9a9eaab8c095bfeef5670c1295576cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ca3fa13d9e8d442efa93b44a870369f7df3fe7562d77b98528f5c19a751f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xm9cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:10Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.690386 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.690422 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.690431 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.690445 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.690455 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:10Z","lastTransitionTime":"2026-01-26T16:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.792836 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.792916 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.792932 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.792965 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.792982 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:10Z","lastTransitionTime":"2026-01-26T16:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.896169 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.896225 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.896241 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.896271 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.896291 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:10Z","lastTransitionTime":"2026-01-26T16:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.998582 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.998941 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.998955 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.998972 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:10 crc kubenswrapper[4856]: I0126 16:59:10.998985 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:10Z","lastTransitionTime":"2026-01-26T16:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.047418 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12e50462-28e6-4531-ada4-e652310e6cce-metrics-certs\") pod \"network-metrics-daemon-295wr\" (UID: \"12e50462-28e6-4531-ada4-e652310e6cce\") " pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 16:59:11 crc kubenswrapper[4856]: E0126 16:59:11.047582 4856 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 16:59:11 crc kubenswrapper[4856]: E0126 16:59:11.047637 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12e50462-28e6-4531-ada4-e652310e6cce-metrics-certs podName:12e50462-28e6-4531-ada4-e652310e6cce nodeName:}" failed. No retries permitted until 2026-01-26 16:59:12.047623477 +0000 UTC m=+48.000877458 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/12e50462-28e6-4531-ada4-e652310e6cce-metrics-certs") pod "network-metrics-daemon-295wr" (UID: "12e50462-28e6-4531-ada4-e652310e6cce") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.102102 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.102138 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.102150 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.102167 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.102203 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:11Z","lastTransitionTime":"2026-01-26T16:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.162659 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pxh94_ab5b6f50-172b-4535-a0f9-5d103bcab4e7/ovnkube-controller/0.log" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.166620 4856 generic.go:334] "Generic (PLEG): container finished" podID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerID="36e1e2c0e79e963e1ad9b28e9b6e7d69c6b6df040359eef2630b4aeb32109f38" exitCode=1 Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.166706 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" event={"ID":"ab5b6f50-172b-4535-a0f9-5d103bcab4e7","Type":"ContainerDied","Data":"36e1e2c0e79e963e1ad9b28e9b6e7d69c6b6df040359eef2630b4aeb32109f38"} Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.167676 4856 scope.go:117] "RemoveContainer" containerID="36e1e2c0e79e963e1ad9b28e9b6e7d69c6b6df040359eef2630b4aeb32109f38" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.168420 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-v7579" event={"ID":"a77e85f9-b566-4807-bb92-55963c97b93c","Type":"ContainerStarted","Data":"c03dc794e9c2035f2e1983eacad3e51d76223cb1b82e2f402c73f9453e4bd2f0"} Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.187183 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:11Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.200133 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t4fq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://627cd6b39fdedd967fffcfc0755439277b2d73016fba1ddc7f5b9d5deba43b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p5swq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t4fq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:11Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.204760 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.204790 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.204821 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.204836 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.204845 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:11Z","lastTransitionTime":"2026-01-26T16:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.216298 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq622" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a742e7b-c420-46e3-9e96-e9c744af6124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7222c9b91c0065a545bc1904d9864a5923cc13bfb6617daeb4a965a830f191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8plh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq622\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:11Z 
is after 2025-08-24T17:21:41Z" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.233078 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59ecd87a-c5db-446d-ad3e-cfabbd648c1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89975b8f9428f81ab5d3fb48ced5dd9c837bea2feea3b89f5f7ff8d7d5d15b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40d432159a07ba20bf95f058bf8a597d67cbd8d852519bed035694f8ba3d8ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://59ba0f131eee14df389391e46bc772894e49e976c3663df5fb3bc98b3cb4a3d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03a01b651d9f66da4dd1f6e0d29ad97c0d6ae46b644c3d997d8dc99476706df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f07438e20bdf71c752bb661084c835341999c561bb5442d75e177223881276f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T16:58:51Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 16:58:43.431198 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 16:58:43.432227 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-99450334/tls.crt::/tmp/serving-cert-99450334/tls.key\\\\\\\"\\\\nI0126 16:58:50.828320 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 16:58:50.830449 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 16:58:50.830472 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 16:58:50.830509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 16:58:50.830515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 16:58:50.834885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 16:58:50.834958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 16:58:50.834991 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 16:58:50.834905 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 16:58:50.835015 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 16:58:50.835084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 16:58:50.835106 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 16:58:50.835116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 16:58:50.837641 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2daa3f13d37b7ae19dfca406b6b5cfdcd4f211287f7d410193f2fee36a24553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffc4b80383322e2c628a33ac37f15fd77c7650ca108c022f97bb48aad023462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ffc4b80383322e2c628a33ac37f15fd77c7650ca108c022f97bb48aad023462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T16:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:11Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.246115 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191ad5d41024c88c1a4fbc30f307dd6340fd55b93b00d22f62209d0e82be286f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf2a4d8be409a46ffe5702797e56a80023a72a19fb7dc5c49e5f4984cedc600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:11Z is after 2025-08-24T17:21:41Z" Jan 26 
16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.259003 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c823748751e9938f10fab08e33b5fffff5a6d15961ce028204131b7b69a56c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:11Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.271340 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:11Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.284901 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad7b59f9-beb7-49d6-a2d1-e29133e46854\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc53a6a6d4222ee5e1ac29fd5957d8cdf3fd42de72c68c85329374ce7afc4004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:59Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0c0b
092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0c0b092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:06Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://249b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://249b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v2l7v\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:11Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.301683 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e1e2c0e79e963e1ad9b28e9b6e7d69c6b6df040359eef2630b4aeb32109f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36e1e2c0e79e963e1ad9b28e9b6e7d69c6b6df040359eef2630b4aeb32109f38\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"message\\\":\\\"ector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 16:59:10.414325 6042 handler.go:190] Sending *v1.NetworkPolicy event handler 4 
for removal\\\\nI0126 16:59:10.414367 6042 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 16:59:10.414628 6042 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 16:59:10.414655 6042 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 16:59:10.414676 6042 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 16:59:10.414682 6042 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 16:59:10.414693 6042 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 16:59:10.414698 6042 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 16:59:10.414726 6042 factory.go:656] Stopping watch factory\\\\nI0126 16:59:10.414742 6042 ovnkube.go:599] Stopped ovnkube\\\\nI0126 16:59:10.414752 6042 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 16:59:10.414783 6042 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 16:59:10.414794 6042 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0126 16:59:10.414800 6042 handler.go:208] Removed *v1.Node event 
han\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8
\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:11Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.307807 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.307848 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.307860 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.307879 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.307893 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:11Z","lastTransitionTime":"2026-01-26T16:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.315673 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-295wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e50462-28e6-4531-ada4-e652310e6cce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf98h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf98h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:59:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-295wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:11Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:11 crc 
kubenswrapper[4856]: I0126 16:59:11.329107 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63c75ede-5170-4db0-811b-5217ef8d72b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da26ddcfc6ccde3c9aabd63bdef3435f9a9eaab8c095bfeef5670c1295576cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ca3fa13d9e8d442efa93b44a870369f7df3fe7562d77b98528f5c19a751f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xm9cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:11Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.342901 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-v7579" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a77e85f9-b566-4807-bb92-55963c97b93c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n9h7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n9h7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-v7579\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:11Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.356162 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tp5hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f28414c-12c1-4adb-be7b-6182310828eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af4930ff9cebd62106545fb3da2dfc93f7b591426fe4c85aa2e637b60c935f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-
26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzc59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tp5hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:11Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.373320 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c09748c0963bdb50c9552f249d1135ea53e68c52babd788dad050951cb849cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:11Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.386175 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:11Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.397397 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:11Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.410860 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.410945 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.410958 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 
16:59:11.410978 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.410990 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:11Z","lastTransitionTime":"2026-01-26T16:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.411447 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-v7579" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a77e85f9-b566-4807-bb92-55963c97b93c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba87c9fc35c230bbee201a5176cb467309f0b9aee82dfc81f3b677a15486d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f
4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n9h7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03dc794e9c2035f2e1983eacad3e51d76223cb1b82e2f402c73f9453e4bd2f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n9h7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:59:09Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-v7579\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:11Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.424862 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tp5hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f28414c-12c1-4adb-be7b-6182310828eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af4930ff9cebd62106545fb3da2dfc93f7b591426fe4c85aa2e637b60c935f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzc59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tp5hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:11Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.439972 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c09748c0963bdb50c9552f249d1135ea53e68c52babd788dad050951cb849cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:11Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.454103 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c823748751e9938f10fab08e33b5fffff5a6d15961ce028204131b7b69a56c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:11Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.467344 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 09:48:19.434967119 +0000 UTC Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.468439 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:11Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.481744 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t4fq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://627cd6b39fdedd967fffcfc0755439277b2d73016fba1ddc7f5b9d5deba43b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p5swq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t4fq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:11Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.499113 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq622" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a742e7b-c420-46e3-9e96-e9c744af6124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7222c9b91c0065a545bc1904d9864a5923cc13bfb6617daeb4a965a830f191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8plh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq622\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:11Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.513386 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.513423 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.513435 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.513451 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.513462 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:11Z","lastTransitionTime":"2026-01-26T16:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.513898 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59ecd87a-c5db-446d-ad3e-cfabbd648c1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89975b8f9428f81ab5d3fb48ced5dd9c837bea2feea3b89f5f7ff8d7d5d15b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40d432159a07ba20bf95f058bf8a597d67cbd8d852519bed035694f8ba3d8ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://59ba0f131eee14df389391e46bc772894e49e976c3663df5fb3bc98b3cb4a3d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03a01b651d9f66da4dd1f6e0d29ad97c0d6ae46b644c3d997d8dc99476706df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f07438e20bdf71c752bb661084c835341999c561bb5442d75e177223881276f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T16:58:51Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 16:58:43.431198 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 16:58:43.432227 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-99450334/tls.crt::/tmp/serving-cert-99450334/tls.key\\\\\\\"\\\\nI0126 16:58:50.828320 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 16:58:50.830449 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 16:58:50.830472 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 16:58:50.830509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 16:58:50.830515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 16:58:50.834885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 16:58:50.834958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 16:58:50.834991 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 16:58:50.834905 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 16:58:50.835015 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 16:58:50.835084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 16:58:50.835106 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 16:58:50.835116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 16:58:50.837641 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2daa3f13d37b7ae19dfca406b6b5cfdcd4f211287f7d410193f2fee36a24553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffc4b80383322e2c628a33ac37f15fd77c7650ca108c022f97bb48aad023462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ffc4b80383322e2c628a33ac37f15fd77c7650ca108c022f97bb48aad023462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T16:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:11Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.532961 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191ad5d41024c88c1a4fbc30f307dd6340fd55b93b00d22f62209d0e82be286f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf2a4d8be409a46ffe5702797e56a80023a72a19fb7dc5c49e5f4984cedc600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:11Z is after 2025-08-24T17:21:41Z" Jan 26 
16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.546566 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-295wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e50462-28e6-4531-ada4-e652310e6cce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf98h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf98h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:59:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-295wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:11Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:11 crc 
kubenswrapper[4856]: I0126 16:59:11.568378 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:11Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.582305 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad7b59f9-beb7-49d6-a2d1-e29133e46854\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc53a6a6d4222ee5e1ac29fd5957d8cdf3fd42de72c68c85329374ce7afc4004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:59Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0c0b
092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0c0b092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:06Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://249b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://249b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v2l7v\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:11Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.604700 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e1e2c0e79e963e1ad9b28e9b6e7d69c6b6df040359eef2630b4aeb32109f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36e1e2c0e79e963e1ad9b28e9b6e7d69c6b6df040359eef2630b4aeb32109f38\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"message\\\":\\\"ector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 16:59:10.414325 6042 handler.go:190] Sending *v1.NetworkPolicy event handler 4 
for removal\\\\nI0126 16:59:10.414367 6042 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 16:59:10.414628 6042 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 16:59:10.414655 6042 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 16:59:10.414676 6042 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 16:59:10.414682 6042 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 16:59:10.414693 6042 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 16:59:10.414698 6042 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 16:59:10.414726 6042 factory.go:656] Stopping watch factory\\\\nI0126 16:59:10.414742 6042 ovnkube.go:599] Stopped ovnkube\\\\nI0126 16:59:10.414752 6042 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 16:59:10.414783 6042 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 16:59:10.414794 6042 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0126 16:59:10.414800 6042 handler.go:208] Removed *v1.Node event 
han\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8
\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:11Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.615966 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.616010 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.616021 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.616041 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.616052 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:11Z","lastTransitionTime":"2026-01-26T16:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.617516 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63c75ede-5170-4db0-811b-5217ef8d72b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da26ddcfc6ccde3c9aabd63bdef3435f9a9eaab8c095bfeef5670c1295576cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ca3fa13d9e8d442efa93b44a870369f7df3fe7562d77b98528f5c19a751f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xm9cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:11Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.719381 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.719428 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.719438 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.719454 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.719466 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:11Z","lastTransitionTime":"2026-01-26T16:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.956097 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.956184 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.956224 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.956254 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:11 crc kubenswrapper[4856]: I0126 16:59:11.956271 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:11Z","lastTransitionTime":"2026-01-26T16:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.057352 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12e50462-28e6-4531-ada4-e652310e6cce-metrics-certs\") pod \"network-metrics-daemon-295wr\" (UID: \"12e50462-28e6-4531-ada4-e652310e6cce\") " pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 16:59:12 crc kubenswrapper[4856]: E0126 16:59:12.057808 4856 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 16:59:12 crc kubenswrapper[4856]: E0126 16:59:12.058003 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12e50462-28e6-4531-ada4-e652310e6cce-metrics-certs podName:12e50462-28e6-4531-ada4-e652310e6cce nodeName:}" failed. No retries permitted until 2026-01-26 16:59:14.05791474 +0000 UTC m=+50.011168751 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/12e50462-28e6-4531-ada4-e652310e6cce-metrics-certs") pod "network-metrics-daemon-295wr" (UID: "12e50462-28e6-4531-ada4-e652310e6cce") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.059605 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.059664 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.059677 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.059694 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.059707 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:12Z","lastTransitionTime":"2026-01-26T16:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.162866 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.162925 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.162936 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.162993 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.163006 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:12Z","lastTransitionTime":"2026-01-26T16:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.176800 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pxh94_ab5b6f50-172b-4535-a0f9-5d103bcab4e7/ovnkube-controller/0.log" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.183491 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" event={"ID":"ab5b6f50-172b-4535-a0f9-5d103bcab4e7","Type":"ContainerStarted","Data":"7c166114ee4e41f0a7e4b0590da090e98c319ef6eda0b9611419dfc55ceb139c"} Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.184297 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.199520 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:12Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.217550 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad7b59f9-beb7-49d6-a2d1-e29133e46854\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc53a6a6d4222ee5e1ac29fd5957d8cdf3fd42de72c68c85329374ce7afc4004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:59Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0c0b
092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0c0b092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:06Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://249b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://249b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v2l7v\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:12Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.243595 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c166114ee4e41f0a7e4b0590da090e98c319ef6eda0b9611419dfc55ceb139c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36e1e2c0e79e963e1ad9b28e9b6e7d69c6b6df040359eef2630b4aeb32109f38\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"message\\\":\\\"ector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 16:59:10.414325 6042 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 16:59:10.414367 6042 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 
16:59:10.414628 6042 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 16:59:10.414655 6042 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 16:59:10.414676 6042 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 16:59:10.414682 6042 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 16:59:10.414693 6042 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 16:59:10.414698 6042 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 16:59:10.414726 6042 factory.go:656] Stopping watch factory\\\\nI0126 16:59:10.414742 6042 ovnkube.go:599] Stopped ovnkube\\\\nI0126 16:59:10.414752 6042 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 16:59:10.414783 6042 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 16:59:10.414794 6042 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0126 16:59:10.414800 6042 handler.go:208] Removed *v1.Node event 
han\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\
\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:12Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.265813 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.265866 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.265885 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.265905 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.265918 4856 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:12Z","lastTransitionTime":"2026-01-26T16:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.282519 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-295wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e50462-28e6-4531-ada4-e652310e6cce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf98h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf98h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:59:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-295wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:12Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:12 crc 
kubenswrapper[4856]: I0126 16:59:12.303447 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63c75ede-5170-4db0-811b-5217ef8d72b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da26ddcfc6ccde3c9aabd63bdef3435f9a9eaab8c095bfeef5670c1295576cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ca3fa13d9e8d442efa93b44a870369f7df3fe7562d77b98528f5c19a751f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xm9cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:12Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.313497 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-v7579" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a77e85f9-b566-4807-bb92-55963c97b93c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba87c9fc35c230bbee201a5176cb467309f0b9aee82dfc81f3b677a15486d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n9h7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03dc794e9c2035f2e1983eacad3e51d76223
cb1b82e2f402c73f9453e4bd2f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n9h7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-v7579\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:12Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.325824 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tp5hk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f28414c-12c1-4adb-be7b-6182310828eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af4930ff9cebd62106545fb3da2dfc93f7b591426fe4c85aa2e637b60c935f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzc59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tp5hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:12Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.340393 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c09748c0963bdb50c9552f249d1135ea53e68c52babd788dad050951cb849cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:12Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.353136 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:12Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.363620 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:12Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.367727 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.367755 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.367763 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.367778 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.367788 4856 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:12Z","lastTransitionTime":"2026-01-26T16:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.375356 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t4fq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://627cd6b39fdedd967fffcfc0755439277b2d73016fba1ddc7f5b9d5deba43b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p5swq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t4fq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:12Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.388807 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq622" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a742e7b-c420-46e3-9e96-e9c744af6124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7222c9b91c0065a545bc1904d9864a5923cc13bfb6617daeb4a965a830f191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8plh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq622\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:12Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.394710 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.394823 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.394799 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.394782 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:59:12 crc kubenswrapper[4856]: E0126 16:59:12.395045 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:59:12 crc kubenswrapper[4856]: E0126 16:59:12.398730 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:59:12 crc kubenswrapper[4856]: E0126 16:59:12.398919 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:59:12 crc kubenswrapper[4856]: E0126 16:59:12.399052 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-295wr" podUID="12e50462-28e6-4531-ada4-e652310e6cce" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.406642 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59ecd87a-c5db-446d-ad3e-cfabbd648c1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89975b8f9428f81ab5d3fb48ced5dd9c837bea2feea3b89f5f7ff8d7d5d15b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40d432159a07ba20bf95f058bf8a597d67cbd8d852519bed035694f8ba3d8ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://59ba0f131eee14df389391e46bc772894e49e976c3663df5fb3bc98b3cb4a3d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03a01b651d9f66da4dd1f6e0d29ad97c0d6ae46b644c3d997d8dc99476706df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f07438e20bdf71c752bb661084c835341999c561bb5442d75e177223881276f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T16:58:51Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 16:58:43.431198 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 16:58:43.432227 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-99450334/tls.crt::/tmp/serving-cert-99450334/tls.key\\\\\\\"\\\\nI0126 16:58:50.828320 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 16:58:50.830449 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 16:58:50.830472 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 16:58:50.830509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 16:58:50.830515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 16:58:50.834885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 16:58:50.834958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 16:58:50.834991 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 16:58:50.834905 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 16:58:50.835015 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 16:58:50.835084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 16:58:50.835106 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 16:58:50.835116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 16:58:50.837641 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2daa3f13d37b7ae19dfca406b6b5cfdcd4f211287f7d410193f2fee36a24553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffc4b80383322e2c628a33ac37f15fd77c7650ca108c022f97bb48aad023462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ffc4b80383322e2c628a33ac37f15fd77c7650ca108c022f97bb48aad023462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T16:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:12Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.422136 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191ad5d41024c88c1a4fbc30f307dd6340fd55b93b00d22f62209d0e82be286f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf2a4d8be409a46ffe5702797e56a80023a72a19fb7dc5c49e5f4984cedc600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:12Z is after 2025-08-24T17:21:41Z" Jan 26 
16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.436682 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c823748751e9938f10fab08e33b5fffff5a6d15961ce028204131b7b69a56c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:12Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.468609 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 09:57:19.547570648 +0000 UTC Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.471347 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.471388 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.471400 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.471421 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.471433 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:12Z","lastTransitionTime":"2026-01-26T16:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.574154 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.574305 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.574332 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.574364 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.574388 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:12Z","lastTransitionTime":"2026-01-26T16:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.677442 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.677493 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.677508 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.677555 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.677568 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:12Z","lastTransitionTime":"2026-01-26T16:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.780590 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.780640 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.780655 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.780675 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.780691 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:12Z","lastTransitionTime":"2026-01-26T16:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.883143 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.883194 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.883204 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.883223 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.883235 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:12Z","lastTransitionTime":"2026-01-26T16:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.986578 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.986651 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.986669 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.986691 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:12 crc kubenswrapper[4856]: I0126 16:59:12.986709 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:12Z","lastTransitionTime":"2026-01-26T16:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.088788 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.088853 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.088866 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.088882 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.088891 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:13Z","lastTransitionTime":"2026-01-26T16:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.189111 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pxh94_ab5b6f50-172b-4535-a0f9-5d103bcab4e7/ovnkube-controller/1.log" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.190065 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pxh94_ab5b6f50-172b-4535-a0f9-5d103bcab4e7/ovnkube-controller/0.log" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.190599 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.190664 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.190675 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.190689 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.190699 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:13Z","lastTransitionTime":"2026-01-26T16:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.193267 4856 generic.go:334] "Generic (PLEG): container finished" podID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerID="7c166114ee4e41f0a7e4b0590da090e98c319ef6eda0b9611419dfc55ceb139c" exitCode=1 Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.193324 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" event={"ID":"ab5b6f50-172b-4535-a0f9-5d103bcab4e7","Type":"ContainerDied","Data":"7c166114ee4e41f0a7e4b0590da090e98c319ef6eda0b9611419dfc55ceb139c"} Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.193376 4856 scope.go:117] "RemoveContainer" containerID="36e1e2c0e79e963e1ad9b28e9b6e7d69c6b6df040359eef2630b4aeb32109f38" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.194828 4856 scope.go:117] "RemoveContainer" containerID="7c166114ee4e41f0a7e4b0590da090e98c319ef6eda0b9611419dfc55ceb139c" Jan 26 16:59:13 crc kubenswrapper[4856]: E0126 16:59:13.195223 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-pxh94_openshift-ovn-kubernetes(ab5b6f50-172b-4535-a0f9-5d103bcab4e7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.214944 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tp5hk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f28414c-12c1-4adb-be7b-6182310828eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af4930ff9cebd62106545fb3da2dfc93f7b591426fe4c85aa2e637b60c935f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzc59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tp5hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:13Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.233569 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c09748c0963bdb50c9552f249d1135ea53e68c52babd788dad050951cb849cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:13Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.245872 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:13Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.261511 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-v7579" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a77e85f9-b566-4807-bb92-55963c97b93c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba87c9fc35c230bbee201a5176cb467309f0b9aee82dfc81f3b677a15486d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n9h7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03dc794e9c2035f2e1983eacad3e51d76223
cb1b82e2f402c73f9453e4bd2f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n9h7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-v7579\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:13Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.275636 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t4fq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://627cd6b39fdedd967fffcfc0755439277b2d73016fba1ddc7f5b9d5deba43b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p5swq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t4fq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:13Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.290287 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq622" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a742e7b-c420-46e3-9e96-e9c744af6124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7222c9b91c0065a545bc1904d9864a5923cc13bfb6617daeb4a965a830f191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8plh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq622\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:13Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.293781 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.293811 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.293820 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.293833 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.293843 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:13Z","lastTransitionTime":"2026-01-26T16:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.304969 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59ecd87a-c5db-446d-ad3e-cfabbd648c1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89975b8f9428f81ab5d3fb48ced5dd9c837bea2feea3b89f5f7ff8d7d5d15b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40d432159a07ba20bf95f058bf8a597d67cbd8d852519bed035694f8ba3d8ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://59ba0f131eee14df389391e46bc772894e49e976c3663df5fb3bc98b3cb4a3d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03a01b651d9f66da4dd1f6e0d29ad97c0d6ae46b644c3d997d8dc99476706df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f07438e20bdf71c752bb661084c835341999c561bb5442d75e177223881276f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T16:58:51Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 16:58:43.431198 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 16:58:43.432227 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-99450334/tls.crt::/tmp/serving-cert-99450334/tls.key\\\\\\\"\\\\nI0126 16:58:50.828320 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 16:58:50.830449 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 16:58:50.830472 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 16:58:50.830509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 16:58:50.830515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 16:58:50.834885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 16:58:50.834958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 16:58:50.834991 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 16:58:50.834905 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 16:58:50.835015 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 16:58:50.835084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 16:58:50.835106 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 16:58:50.835116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 16:58:50.837641 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2daa3f13d37b7ae19dfca406b6b5cfdcd4f211287f7d410193f2fee36a24553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffc4b80383322e2c628a33ac37f15fd77c7650ca108c022f97bb48aad023462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ffc4b80383322e2c628a33ac37f15fd77c7650ca108c022f97bb48aad023462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T16:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:13Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.317429 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191ad5d41024c88c1a4fbc30f307dd6340fd55b93b00d22f62209d0e82be286f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf2a4d8be409a46ffe5702797e56a80023a72a19fb7dc5c49e5f4984cedc600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:13Z is after 2025-08-24T17:21:41Z" Jan 26 
16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.327946 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c823748751e9938f10fab08e33b5fffff5a6d15961ce028204131b7b69a56c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:13Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.339263 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:13Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.351166 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:13Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.364624 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad7b59f9-beb7-49d6-a2d1-e29133e46854\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc53a6a6d4222ee5e1ac29fd5957d8cdf3fd42de72c68c85329374ce7afc4004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:59Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0c0b
092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0c0b092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:06Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://249b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://249b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v2l7v\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:13Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.384810 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c166114ee4e41f0a7e4b0590da090e98c319ef6eda0b9611419dfc55ceb139c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36e1e2c0e79e963e1ad9b28e9b6e7d69c6b6df040359eef2630b4aeb32109f38\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"message\\\":\\\"ector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 16:59:10.414325 6042 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 16:59:10.414367 6042 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 
16:59:10.414628 6042 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 16:59:10.414655 6042 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 16:59:10.414676 6042 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 16:59:10.414682 6042 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 16:59:10.414693 6042 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 16:59:10.414698 6042 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 16:59:10.414726 6042 factory.go:656] Stopping watch factory\\\\nI0126 16:59:10.414742 6042 ovnkube.go:599] Stopped ovnkube\\\\nI0126 16:59:10.414752 6042 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 16:59:10.414783 6042 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 16:59:10.414794 6042 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0126 16:59:10.414800 6042 handler.go:208] Removed *v1.Node event han\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c166114ee4e41f0a7e4b0590da090e98c319ef6eda0b9611419dfc55ceb139c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:59:12Z\\\",\\\"message\\\":\\\"etworkPolicy event handler 4 for removal\\\\nI0126 16:59:12.299079 6308 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 16:59:12.299145 6308 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 16:59:12.299187 6308 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 16:59:12.299270 6308 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 16:59:12.299298 6308 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 
16:59:12.299298 6308 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 16:59:12.299306 6308 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 16:59:12.299340 6308 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 16:59:12.299299 6308 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 16:59:12.299385 6308 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 16:59:12.312081 6308 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 16:59:12.312141 6308 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 16:59:12.312158 6308 factory.go:656] Stopping watch factory\\\\nI0126 16:59:12.312172 6308 ovnkube.go:599] Stopped ovnkube\\\\nI0126 16:59:12.312201 6308 handler.go:208] Removed *v1.Node event handler 2\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-net
d\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\
":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:13Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.396694 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.396745 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.396756 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.396771 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.396782 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:13Z","lastTransitionTime":"2026-01-26T16:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.398923 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-295wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e50462-28e6-4531-ada4-e652310e6cce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf98h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf98h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:59:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-295wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:13Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:13 crc 
kubenswrapper[4856]: I0126 16:59:13.415655 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63c75ede-5170-4db0-811b-5217ef8d72b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da26ddcfc6ccde3c9aabd63bdef3435f9a9eaab8c095bfeef5670c1295576cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ca3fa13d9e8d442efa93b44a870369f7df3fe7562d77b98528f5c19a751f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xm9cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:13Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.469224 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 06:14:20.520559591 +0000 UTC Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.499188 4856 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.499224 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.499234 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.499250 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.499263 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:13Z","lastTransitionTime":"2026-01-26T16:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.601309 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.601348 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.601361 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.601378 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.601389 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:13Z","lastTransitionTime":"2026-01-26T16:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.704911 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.704944 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.704954 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.704967 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.704978 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:13Z","lastTransitionTime":"2026-01-26T16:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.720196 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.777094 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tp5hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f28414c-12c1-4adb-be7b-6182310828eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af4930ff9cebd62106545fb3da2dfc93f7b591426fe4c85aa2e637b60c935f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzc59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tp5hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:13Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.779388 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.791868 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c09748c0963bdb50c9552f249d1135ea53e68c52babd788dad050951cb849cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:13Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.803445 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:13Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.807044 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.807099 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.807112 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.807128 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.807138 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:13Z","lastTransitionTime":"2026-01-26T16:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.817008 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-v7579" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a77e85f9-b566-4807-bb92-55963c97b93c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba87c9fc35c230bbee201a5176cb467309f0b9aee82dfc81f3b677a15486d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"
ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n9h7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03dc794e9c2035f2e1983eacad3e51d76223cb1b82e2f402c73f9453e4bd2f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n9h7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-v7579\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:13Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.830049 4856 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59ecd87a-c5db-446d-ad3e-cfabbd648c1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89975b8f9428f81ab5d3fb48ced5dd9c837bea2feea3b89f5f7ff8d7d5d15b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes
/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40d432159a07ba20bf95f058bf8a597d67cbd8d852519bed035694f8ba3d8ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ba0f131eee14df389391e46bc772894e49e976c3663df5fb3bc98b3cb4a3d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03a01b651d9f66da4dd1f6e0d29ad97c0d6ae46b644c3d997d8dc99476706df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\
\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f07438e20bdf71c752bb661084c835341999c561bb5442d75e177223881276f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T16:58:51Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 16:58:43.431198 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 16:58:43.432227 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-99450334/tls.crt::/tmp/serving-cert-99450334/tls.key\\\\\\\"\\\\nI0126 16:58:50.828320 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 16:58:50.830449 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 16:58:50.830472 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 16:58:50.830509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 16:58:50.830515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 16:58:50.834885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 16:58:50.834958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 16:58:50.834991 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 16:58:50.834905 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 16:58:50.835015 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 16:58:50.835084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 16:58:50.835106 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 16:58:50.835116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 16:58:50.837641 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2daa3f13d37b7ae19dfca406b6b5cfdcd4f211287f7d410193f2fee36a24553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffc4b80383322e2c628a33ac37f15fd77c7650ca108c022f97bb48aad023462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de25
97126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ffc4b80383322e2c628a33ac37f15fd77c7650ca108c022f97bb48aad023462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:13Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.847966 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191ad5d41024c88c1a4fbc30f307dd6340fd55b93b00d22f62209d0e82be286f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf2a4d8be409a46ffe5702797e56a80023a72a19fb7dc5c49e5f4984cedc600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:13Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.859969 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c823748751e9938f10fab08e33b5fffff5a6d15961ce028204131b7b69a56c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T16:59:13Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.871947 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:13Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.882058 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t4fq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://627cd6b39fdedd967fffcfc0755439277b2d73016fba1ddc7f5b9d5deba43b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p5swq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t4fq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:13Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.894242 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq622" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a742e7b-c420-46e3-9e96-e9c744af6124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7222c9b91c0065a545bc1904d9864a5923cc13bfb6617daeb4a965a830f191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8plh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq622\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:13Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.907511 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:13Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.908887 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.908944 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.908956 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.908977 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.908990 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:13Z","lastTransitionTime":"2026-01-26T16:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.924582 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad7b59f9-beb7-49d6-a2d1-e29133e46854\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc53a6a6d4222ee5e1ac29fd5957d8cdf3fd42de72c68c85329374ce7afc4004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0c0b092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0c0b092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://249b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://249b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v2l7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:13Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.942418 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c166114ee4e41f0a7e4b0590da090e98c319ef6eda0b9611419dfc55ceb139c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36e1e2c0e79e963e1ad9b28e9b6e7d69c6b6df040359eef2630b4aeb32109f38\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"message\\\":\\\"ector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 16:59:10.414325 6042 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 16:59:10.414367 6042 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 
16:59:10.414628 6042 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 16:59:10.414655 6042 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 16:59:10.414676 6042 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 16:59:10.414682 6042 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 16:59:10.414693 6042 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 16:59:10.414698 6042 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 16:59:10.414726 6042 factory.go:656] Stopping watch factory\\\\nI0126 16:59:10.414742 6042 ovnkube.go:599] Stopped ovnkube\\\\nI0126 16:59:10.414752 6042 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 16:59:10.414783 6042 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 16:59:10.414794 6042 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0126 16:59:10.414800 6042 handler.go:208] Removed *v1.Node event han\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c166114ee4e41f0a7e4b0590da090e98c319ef6eda0b9611419dfc55ceb139c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:59:12Z\\\",\\\"message\\\":\\\"etworkPolicy event handler 4 for removal\\\\nI0126 16:59:12.299079 6308 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 16:59:12.299145 6308 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 16:59:12.299187 6308 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 16:59:12.299270 6308 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 16:59:12.299298 6308 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 
16:59:12.299298 6308 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 16:59:12.299306 6308 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 16:59:12.299340 6308 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 16:59:12.299299 6308 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 16:59:12.299385 6308 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 16:59:12.312081 6308 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 16:59:12.312141 6308 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 16:59:12.312158 6308 factory.go:656] Stopping watch factory\\\\nI0126 16:59:12.312172 6308 ovnkube.go:599] Stopped ovnkube\\\\nI0126 16:59:12.312201 6308 handler.go:208] Removed *v1.Node event handler 2\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-net
d\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\
":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:13Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:13 crc kubenswrapper[4856]: I0126 16:59:13.953859 4856 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-295wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e50462-28e6-4531-ada4-e652310e6cce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf98h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf98h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:59:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-295wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:13Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:13 crc 
kubenswrapper[4856]: I0126 16:59:13.969060 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63c75ede-5170-4db0-811b-5217ef8d72b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da26ddcfc6ccde3c9aabd63bdef3435f9a9eaab8c095bfeef5670c1295576cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ca3fa13d9e8d442efa93b44a870369f7df3fe7562d77b98528f5c19a751f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xm9cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:13Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.011191 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.011231 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.011252 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.011267 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.011275 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:14Z","lastTransitionTime":"2026-01-26T16:59:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.079274 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12e50462-28e6-4531-ada4-e652310e6cce-metrics-certs\") pod \"network-metrics-daemon-295wr\" (UID: \"12e50462-28e6-4531-ada4-e652310e6cce\") " pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 16:59:14 crc kubenswrapper[4856]: E0126 16:59:14.079482 4856 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 16:59:14 crc kubenswrapper[4856]: E0126 16:59:14.079604 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12e50462-28e6-4531-ada4-e652310e6cce-metrics-certs podName:12e50462-28e6-4531-ada4-e652310e6cce nodeName:}" failed. No retries permitted until 2026-01-26 16:59:18.079584776 +0000 UTC m=+54.032838757 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/12e50462-28e6-4531-ada4-e652310e6cce-metrics-certs") pod "network-metrics-daemon-295wr" (UID: "12e50462-28e6-4531-ada4-e652310e6cce") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.113973 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.114012 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.114024 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.114056 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.114066 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:14Z","lastTransitionTime":"2026-01-26T16:59:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.197841 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pxh94_ab5b6f50-172b-4535-a0f9-5d103bcab4e7/ovnkube-controller/1.log" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.201204 4856 scope.go:117] "RemoveContainer" containerID="7c166114ee4e41f0a7e4b0590da090e98c319ef6eda0b9611419dfc55ceb139c" Jan 26 16:59:14 crc kubenswrapper[4856]: E0126 16:59:14.201369 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-pxh94_openshift-ovn-kubernetes(ab5b6f50-172b-4535-a0f9-5d103bcab4e7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.212785 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc864b0d-83bc-4954-9c61-ad650157caff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbde934a6c8acad10ca3ab8206d0ddbd4f7b17e9d304b898a68f4d3b0303bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b48f763d4aff37169399be766d5ab4f7ebbf91f304d139c9022a8556946eb107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0094e662f53c4832a984e05a880021af05ffc4c27f25394c28a070d9ef5490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb95f623fa4f5f42217649ccce4225f9fe588bb3558da9334eef2b40ddb62486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://cb95f623fa4f5f42217649ccce4225f9fe588bb3558da9334eef2b40ddb62486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.216100 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.216127 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.216135 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.216152 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.216162 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:14Z","lastTransitionTime":"2026-01-26T16:59:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.224753 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63c75ede-5170-4db0-811b-5217ef8d72b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da26ddcfc6ccde3c9aabd63bdef3435f9a9eaab8c095bfeef5670c1295576cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ca3fa13d9e8d442efa93b44a870369f7df3fe7562d77b98528f5c19a751f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xm9cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.235207 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tp5hk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f28414c-12c1-4adb-be7b-6182310828eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af4930ff9cebd62106545fb3da2dfc93f7b591426fe4c85aa2e637b60c935f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzc59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tp5hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.248193 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c09748c0963bdb50c9552f249d1135ea53e68c52babd788dad050951cb849cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.258909 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.269642 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-v7579" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a77e85f9-b566-4807-bb92-55963c97b93c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba87c9fc35c230bbee201a5176cb467309f0b9aee82dfc81f3b677a15486d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n9h7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03dc794e9c2035f2e1983eacad3e51d76223
cb1b82e2f402c73f9453e4bd2f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n9h7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-v7579\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.278908 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t4fq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://627cd6b39fdedd967fffcfc0755439277b2d73016fba1ddc7f5b9d5deba43b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p5swq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t4fq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.283453 4856 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.294391 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq622" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a742e7b-c420-46e3-9e96-e9c744af6124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7222c9b91c0065a545bc1904d9864a5923cc13bfb6617daeb4a965a830f191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd
97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8plh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\
\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq622\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.311471 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59ecd87a-c5db-446d-ad3e-cfabbd648c1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89975b8f9428f81ab5d3fb48ced5dd9c837bea2feea3b89f5f7ff8d7d5d15b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40d432159a07ba20bf95f058bf8a597d67cbd8d852519bed035694f8ba3d8ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://59ba0f131eee14df389391e46bc772894e49e976c3663df5fb3bc98b3cb4a3d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03a01b651d9f66da4dd1f6e0d29ad97c0d6ae46b644c3d997d8dc99476706df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f07438e20bdf71c752bb661084c835341999c561bb5442d75e177223881276f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T16:58:51Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 16:58:43.431198 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 16:58:43.432227 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-99450334/tls.crt::/tmp/serving-cert-99450334/tls.key\\\\\\\"\\\\nI0126 16:58:50.828320 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 16:58:50.830449 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 16:58:50.830472 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 16:58:50.830509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 16:58:50.830515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 16:58:50.834885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 16:58:50.834958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 16:58:50.834991 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 16:58:50.834905 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 16:58:50.835015 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 16:58:50.835084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 16:58:50.835106 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 16:58:50.835116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 16:58:50.837641 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2daa3f13d37b7ae19dfca406b6b5cfdcd4f211287f7d410193f2fee36a24553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffc4b80383322e2c628a33ac37f15fd77c7650ca108c022f97bb48aad023462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ffc4b80383322e2c628a33ac37f15fd77c7650ca108c022f97bb48aad023462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T16:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.319065 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.319118 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.319128 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.319142 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.319152 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:14Z","lastTransitionTime":"2026-01-26T16:59:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.324427 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191ad5d41024c88c1a4fbc30f307dd6340fd55b93b00d22f62209d0e82be286f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf2a4d8be409a46ffe5702797e56a80023a72a19fb7dc5c49e5f4984cedc600\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.338080 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c823748751e9938f10fab08e33b5fffff5a6d15961ce028204131b7b69a56c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T16:59:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.351285 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.362425 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.376174 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad7b59f9-beb7-49d6-a2d1-e29133e46854\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc53a6a6d4222ee5e1ac29fd5957d8cdf3fd42de72c68c85329374ce7afc4004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:59Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0c0b
092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0c0b092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:06Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://249b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://249b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v2l7v\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.394926 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:59:14 crc kubenswrapper[4856]: E0126 16:59:14.395053 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.394936 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.394950 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:59:14 crc kubenswrapper[4856]: E0126 16:59:14.395215 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-295wr" podUID="12e50462-28e6-4531-ada4-e652310e6cce" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.394933 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:59:14 crc kubenswrapper[4856]: E0126 16:59:14.395313 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:59:14 crc kubenswrapper[4856]: E0126 16:59:14.395252 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.399955 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c166114ee4e41f0a7e4b0590da090e98c319ef6eda0b9611419dfc55ceb139c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c166114ee4e41f0a7e4b0590da090e98c319ef6eda0b9611419dfc55ceb139c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:59:12Z\\\",\\\"message\\\":\\\"etworkPolicy event handler 4 for removal\\\\nI0126 16:59:12.299079 6308 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 16:59:12.299145 6308 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 16:59:12.299187 6308 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 
16:59:12.299270 6308 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 16:59:12.299298 6308 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 16:59:12.299298 6308 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 16:59:12.299306 6308 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 16:59:12.299340 6308 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 16:59:12.299299 6308 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 16:59:12.299385 6308 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 16:59:12.312081 6308 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 16:59:12.312141 6308 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 16:59:12.312158 6308 factory.go:656] Stopping watch factory\\\\nI0126 16:59:12.312172 6308 ovnkube.go:599] Stopped ovnkube\\\\nI0126 16:59:12.312201 6308 handler.go:208] Removed *v1.Node event handler 2\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pxh94_openshift-ovn-kubernetes(ab5b6f50-172b-4535-a0f9-5d103bcab4e7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf
1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.410638 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-295wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e50462-28e6-4531-ada4-e652310e6cce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf98h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf98h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:59:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-295wr\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:14Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.421495 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.421535 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.421548 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.421565 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.421575 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:14Z","lastTransitionTime":"2026-01-26T16:59:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.470379 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 03:51:58.276263714 +0000 UTC Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.523888 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.523924 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.523935 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.523950 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.523960 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:14Z","lastTransitionTime":"2026-01-26T16:59:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.626379 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.626428 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.626441 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.626458 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.626474 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:14Z","lastTransitionTime":"2026-01-26T16:59:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.729008 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.729062 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.729075 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.729095 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.729108 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:14Z","lastTransitionTime":"2026-01-26T16:59:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.831574 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.831608 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.831625 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.831640 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.831650 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:14Z","lastTransitionTime":"2026-01-26T16:59:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.934096 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.934146 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.934159 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.934177 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:14 crc kubenswrapper[4856]: I0126 16:59:14.934189 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:14Z","lastTransitionTime":"2026-01-26T16:59:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.037599 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.037657 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.037673 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.037696 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.037711 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:15Z","lastTransitionTime":"2026-01-26T16:59:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.140445 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.140483 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.140492 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.140506 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.140552 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:15Z","lastTransitionTime":"2026-01-26T16:59:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.243041 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.243087 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.243099 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.243116 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.243128 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:15Z","lastTransitionTime":"2026-01-26T16:59:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.345669 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.345712 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.345727 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.345753 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.345768 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:15Z","lastTransitionTime":"2026-01-26T16:59:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.408811 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:15Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.419913 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t4fq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://627cd6b39fdedd967fffcfc0755439277b2d73016fba1ddc7f5b9d5deba43b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p5swq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t4fq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:15Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.437160 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq622" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a742e7b-c420-46e3-9e96-e9c744af6124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7222c9b91c0065a545bc1904d9864a5923cc13bfb6617daeb4a965a830f191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8plh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq622\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:15Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.449363 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.449410 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.449425 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.449448 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.449470 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:15Z","lastTransitionTime":"2026-01-26T16:59:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.453627 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59ecd87a-c5db-446d-ad3e-cfabbd648c1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89975b8f9428f81ab5d3fb48ced5dd9c837bea2feea3b89f5f7ff8d7d5d15b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40d432159a07ba20bf95f058bf8a597d67cbd8d852519bed035694f8ba3d8ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://59ba0f131eee14df389391e46bc772894e49e976c3663df5fb3bc98b3cb4a3d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03a01b651d9f66da4dd1f6e0d29ad97c0d6ae46b644c3d997d8dc99476706df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f07438e20bdf71c752bb661084c835341999c561bb5442d75e177223881276f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T16:58:51Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 16:58:43.431198 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 16:58:43.432227 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-99450334/tls.crt::/tmp/serving-cert-99450334/tls.key\\\\\\\"\\\\nI0126 16:58:50.828320 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 16:58:50.830449 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 16:58:50.830472 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 16:58:50.830509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 16:58:50.830515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 16:58:50.834885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 16:58:50.834958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 16:58:50.834991 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 16:58:50.834905 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 16:58:50.835015 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 16:58:50.835084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 16:58:50.835106 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 16:58:50.835116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 16:58:50.837641 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2daa3f13d37b7ae19dfca406b6b5cfdcd4f211287f7d410193f2fee36a24553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffc4b80383322e2c628a33ac37f15fd77c7650ca108c022f97bb48aad023462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ffc4b80383322e2c628a33ac37f15fd77c7650ca108c022f97bb48aad023462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T16:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:15Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.470302 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191ad5d41024c88c1a4fbc30f307dd6340fd55b93b00d22f62209d0e82be286f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf2a4d8be409a46ffe5702797e56a80023a72a19fb7dc5c49e5f4984cedc600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:15Z is after 2025-08-24T17:21:41Z" Jan 26 
16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.470664 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 22:45:20.21076415 +0000 UTC Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.484835 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c823748751e9938f10fab08e33b5fffff5a6d15961ce028204131b7b69a56c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:15Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.498127 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:15Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.516293 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad7b59f9-beb7-49d6-a2d1-e29133e46854\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc53a6a6d4222ee5e1ac29fd5957d8cdf3fd42de72c68c85329374ce7afc4004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:59Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0c0b
092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0c0b092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:06Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://249b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://249b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v2l7v\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:15Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.534065 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c166114ee4e41f0a7e4b0590da090e98c319ef6eda0b9611419dfc55ceb139c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c166114ee4e41f0a7e4b0590da090e98c319ef6eda0b9611419dfc55ceb139c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:59:12Z\\\",\\\"message\\\":\\\"etworkPolicy event handler 4 for removal\\\\nI0126 16:59:12.299079 6308 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 16:59:12.299145 6308 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 16:59:12.299187 6308 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 
16:59:12.299270 6308 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 16:59:12.299298 6308 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 16:59:12.299298 6308 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 16:59:12.299306 6308 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 16:59:12.299340 6308 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 16:59:12.299299 6308 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 16:59:12.299385 6308 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 16:59:12.312081 6308 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 16:59:12.312141 6308 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 16:59:12.312158 6308 factory.go:656] Stopping watch factory\\\\nI0126 16:59:12.312172 6308 ovnkube.go:599] Stopped ovnkube\\\\nI0126 16:59:12.312201 6308 handler.go:208] Removed *v1.Node event handler 2\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pxh94_openshift-ovn-kubernetes(ab5b6f50-172b-4535-a0f9-5d103bcab4e7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf
1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:15Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.546505 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-295wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e50462-28e6-4531-ada4-e652310e6cce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf98h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf98h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:59:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-295wr\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:15Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.551639 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.551671 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.551681 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.551716 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.551731 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:15Z","lastTransitionTime":"2026-01-26T16:59:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.559709 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc864b0d-83bc-4954-9c61-ad650157caff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbde934a6c8acad10ca3ab8206d0ddbd4f7b17e9d304b898a68f4d3b0303bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b48f763d4aff37169399be766d5ab4
f7ebbf91f304d139c9022a8556946eb107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0094e662f53c4832a984e05a880021af05ffc4c27f25394c28a070d9ef5490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb95f623fa4f5f42217649ccce4225f9fe588bb3558da9334eef2b40ddb62486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb95f623fa4f5f42217649ccce4225f9fe588bb3558da9334eef2b40ddb62486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:15Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.577017 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63c75ede-5170-4db0-811b-5217ef8d72b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da26ddcfc6ccde3c9aabd63bdef3435f9a9eaab8c095bfeef5670c1295576cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ca3fa13d9e8d442efa93b44a870369f7df3fe7
562d77b98528f5c19a751f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xm9cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:15Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.587506 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-v7579" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a77e85f9-b566-4807-bb92-55963c97b93c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba87c9fc35c230bbee201a5176cb467309f0b9aee82dfc81f3b677a15486d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n9h7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03dc794e9c2035f2e1983eacad3e51d76223
cb1b82e2f402c73f9453e4bd2f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n9h7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-v7579\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:15Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.597934 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tp5hk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f28414c-12c1-4adb-be7b-6182310828eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af4930ff9cebd62106545fb3da2dfc93f7b591426fe4c85aa2e637b60c935f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzc59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tp5hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:15Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.610120 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c09748c0963bdb50c9552f249d1135ea53e68c52babd788dad050951cb849cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:15Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.621908 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:15Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.653857 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.653895 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.653903 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 
16:59:15.653916 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.653925 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:15Z","lastTransitionTime":"2026-01-26T16:59:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.757097 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.757152 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.757168 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.757191 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.757207 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:15Z","lastTransitionTime":"2026-01-26T16:59:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.859461 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.859502 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.859510 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.859549 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.859562 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:15Z","lastTransitionTime":"2026-01-26T16:59:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.962381 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.962696 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.962799 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.962900 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:15 crc kubenswrapper[4856]: I0126 16:59:15.962999 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:15Z","lastTransitionTime":"2026-01-26T16:59:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.065897 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.065936 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.065944 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.065956 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.065965 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:16Z","lastTransitionTime":"2026-01-26T16:59:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.168978 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.169361 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.169715 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.169964 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.170193 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:16Z","lastTransitionTime":"2026-01-26T16:59:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.272844 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.272879 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.272892 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.272911 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.272924 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:16Z","lastTransitionTime":"2026-01-26T16:59:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.376458 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.376497 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.376509 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.376564 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.376590 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:16Z","lastTransitionTime":"2026-01-26T16:59:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.395060 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.395090 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:59:16 crc kubenswrapper[4856]: E0126 16:59:16.395181 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:59:16 crc kubenswrapper[4856]: E0126 16:59:16.395295 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.395402 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:59:16 crc kubenswrapper[4856]: E0126 16:59:16.395574 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.395697 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 16:59:16 crc kubenswrapper[4856]: E0126 16:59:16.395822 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-295wr" podUID="12e50462-28e6-4531-ada4-e652310e6cce" Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.471043 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 07:46:13.101827915 +0000 UTC Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.479345 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.479406 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.479417 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.479443 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.479460 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:16Z","lastTransitionTime":"2026-01-26T16:59:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.583101 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.583188 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.583363 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.583433 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.583457 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:16Z","lastTransitionTime":"2026-01-26T16:59:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.686682 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.686753 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.686771 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.686796 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.686813 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:16Z","lastTransitionTime":"2026-01-26T16:59:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.790485 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.790643 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.790689 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.790734 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.790756 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:16Z","lastTransitionTime":"2026-01-26T16:59:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.893784 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.893833 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.893847 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.893873 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.893887 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:16Z","lastTransitionTime":"2026-01-26T16:59:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.996740 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.996804 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.996828 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.996860 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:16 crc kubenswrapper[4856]: I0126 16:59:16.996882 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:16Z","lastTransitionTime":"2026-01-26T16:59:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:17 crc kubenswrapper[4856]: I0126 16:59:17.100635 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:17 crc kubenswrapper[4856]: I0126 16:59:17.100693 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:17 crc kubenswrapper[4856]: I0126 16:59:17.100708 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:17 crc kubenswrapper[4856]: I0126 16:59:17.100728 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:17 crc kubenswrapper[4856]: I0126 16:59:17.100745 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:17Z","lastTransitionTime":"2026-01-26T16:59:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:17 crc kubenswrapper[4856]: I0126 16:59:17.204843 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:17 crc kubenswrapper[4856]: I0126 16:59:17.204910 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:17 crc kubenswrapper[4856]: I0126 16:59:17.205006 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:17 crc kubenswrapper[4856]: I0126 16:59:17.205036 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:17 crc kubenswrapper[4856]: I0126 16:59:17.205052 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:17Z","lastTransitionTime":"2026-01-26T16:59:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:17 crc kubenswrapper[4856]: I0126 16:59:17.308334 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:17 crc kubenswrapper[4856]: I0126 16:59:17.308393 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:17 crc kubenswrapper[4856]: I0126 16:59:17.308409 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:17 crc kubenswrapper[4856]: I0126 16:59:17.308431 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:17 crc kubenswrapper[4856]: I0126 16:59:17.308453 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:17Z","lastTransitionTime":"2026-01-26T16:59:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:17 crc kubenswrapper[4856]: I0126 16:59:17.411439 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:17 crc kubenswrapper[4856]: I0126 16:59:17.411483 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:17 crc kubenswrapper[4856]: I0126 16:59:17.411500 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:17 crc kubenswrapper[4856]: I0126 16:59:17.411517 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:17 crc kubenswrapper[4856]: I0126 16:59:17.411580 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:17Z","lastTransitionTime":"2026-01-26T16:59:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:17 crc kubenswrapper[4856]: I0126 16:59:17.472032 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 19:59:29.617704909 +0000 UTC Jan 26 16:59:17 crc kubenswrapper[4856]: I0126 16:59:17.514569 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:17 crc kubenswrapper[4856]: I0126 16:59:17.514620 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:17 crc kubenswrapper[4856]: I0126 16:59:17.514634 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:17 crc kubenswrapper[4856]: I0126 16:59:17.514659 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:17 crc kubenswrapper[4856]: I0126 16:59:17.514674 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:17Z","lastTransitionTime":"2026-01-26T16:59:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:17 crc kubenswrapper[4856]: I0126 16:59:17.617181 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:17 crc kubenswrapper[4856]: I0126 16:59:17.617235 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:17 crc kubenswrapper[4856]: I0126 16:59:17.617249 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:17 crc kubenswrapper[4856]: I0126 16:59:17.617269 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:17 crc kubenswrapper[4856]: I0126 16:59:17.617283 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:17Z","lastTransitionTime":"2026-01-26T16:59:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:17 crc kubenswrapper[4856]: I0126 16:59:17.720505 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:17 crc kubenswrapper[4856]: I0126 16:59:17.720611 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:17 crc kubenswrapper[4856]: I0126 16:59:17.720635 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:17 crc kubenswrapper[4856]: I0126 16:59:17.720668 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:17 crc kubenswrapper[4856]: I0126 16:59:17.720691 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:17Z","lastTransitionTime":"2026-01-26T16:59:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:17 crc kubenswrapper[4856]: I0126 16:59:17.823409 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:17 crc kubenswrapper[4856]: I0126 16:59:17.823498 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:17 crc kubenswrapper[4856]: I0126 16:59:17.823508 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:17 crc kubenswrapper[4856]: I0126 16:59:17.823565 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:17 crc kubenswrapper[4856]: I0126 16:59:17.823580 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:17Z","lastTransitionTime":"2026-01-26T16:59:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:17 crc kubenswrapper[4856]: I0126 16:59:17.926430 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:17 crc kubenswrapper[4856]: I0126 16:59:17.926482 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:17 crc kubenswrapper[4856]: I0126 16:59:17.926500 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:17 crc kubenswrapper[4856]: I0126 16:59:17.926553 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:17 crc kubenswrapper[4856]: I0126 16:59:17.926569 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:17Z","lastTransitionTime":"2026-01-26T16:59:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.030275 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.030385 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.030398 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.030430 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.030445 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:18Z","lastTransitionTime":"2026-01-26T16:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.114030 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12e50462-28e6-4531-ada4-e652310e6cce-metrics-certs\") pod \"network-metrics-daemon-295wr\" (UID: \"12e50462-28e6-4531-ada4-e652310e6cce\") " pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 16:59:18 crc kubenswrapper[4856]: E0126 16:59:18.114310 4856 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 16:59:18 crc kubenswrapper[4856]: E0126 16:59:18.114414 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12e50462-28e6-4531-ada4-e652310e6cce-metrics-certs podName:12e50462-28e6-4531-ada4-e652310e6cce nodeName:}" failed. No retries permitted until 2026-01-26 16:59:26.114385653 +0000 UTC m=+62.067639684 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/12e50462-28e6-4531-ada4-e652310e6cce-metrics-certs") pod "network-metrics-daemon-295wr" (UID: "12e50462-28e6-4531-ada4-e652310e6cce") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.132908 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.132955 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.132974 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.132991 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.133002 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:18Z","lastTransitionTime":"2026-01-26T16:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.236616 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.236684 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.236706 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.236732 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.236748 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:18Z","lastTransitionTime":"2026-01-26T16:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.341476 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.341593 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.341622 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.341668 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.341700 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:18Z","lastTransitionTime":"2026-01-26T16:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.394352 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.394393 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 16:59:18 crc kubenswrapper[4856]: E0126 16:59:18.394647 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 16:59:18 crc kubenswrapper[4856]: E0126 16:59:18.394786 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.394399 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr"
Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.395044 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 16:59:18 crc kubenswrapper[4856]: E0126 16:59:18.395467 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-295wr" podUID="12e50462-28e6-4531-ada4-e652310e6cce"
Jan 26 16:59:18 crc kubenswrapper[4856]: E0126 16:59:18.396003 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.444807 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.444863 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.444875 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.444890 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.444900 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:18Z","lastTransitionTime":"2026-01-26T16:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.472522 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 15:18:20.671581864 +0000 UTC
Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.548944 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.549012 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.549034 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.549064 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.549087 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:18Z","lastTransitionTime":"2026-01-26T16:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.652339 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.652409 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.652438 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.652460 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.652475 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:18Z","lastTransitionTime":"2026-01-26T16:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.656767 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.656839 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.656864 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.656892 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.656919 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:18Z","lastTransitionTime":"2026-01-26T16:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:18 crc kubenswrapper[4856]: E0126 16:59:18.677511 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17523591-a778-4a97-aeab-8a7a93101850\\\",\\\"systemUUID\\\":\\\"ca45d056-99cb-4442-8a44-7e899628ecb2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:18Z is after 2025-08-24T17:21:41Z"
Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.683713 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.683773 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.683793 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.683821 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.683846 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:18Z","lastTransitionTime":"2026-01-26T16:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:18 crc kubenswrapper[4856]: E0126 16:59:18.699704 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17523591-a778-4a97-aeab-8a7a93101850\\\",\\\"systemUUID\\\":\\\"ca45d056-99cb-4442-8a44-7e899628ecb2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:18Z is after 2025-08-24T17:21:41Z"
Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.704497 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.704574 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.704587 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.704606 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.704617 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:18Z","lastTransitionTime":"2026-01-26T16:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:18 crc kubenswrapper[4856]: E0126 16:59:18.717231 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17523591-a778-4a97-aeab-8a7a93101850\\\",\\\"systemUUID\\\":\\\"ca45d056-99cb-4442-8a44-7e899628ecb2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:18Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.723208 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.723268 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.723283 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.723358 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.723378 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:18Z","lastTransitionTime":"2026-01-26T16:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:18 crc kubenswrapper[4856]: E0126 16:59:18.742506 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17523591-a778-4a97-aeab-8a7a93101850\\\",\\\"systemUUID\\\":\\\"ca45d056-99cb-4442-8a44-7e899628ecb2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:18Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.747988 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.748266 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.748385 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.748550 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.748687 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:18Z","lastTransitionTime":"2026-01-26T16:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:18 crc kubenswrapper[4856]: E0126 16:59:18.765308 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17523591-a778-4a97-aeab-8a7a93101850\\\",\\\"systemUUID\\\":\\\"ca45d056-99cb-4442-8a44-7e899628ecb2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:18Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:18 crc kubenswrapper[4856]: E0126 16:59:18.765698 4856 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.768753 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.769004 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.769160 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.769294 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.769429 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:18Z","lastTransitionTime":"2026-01-26T16:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.873905 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.874303 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.874482 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.874682 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.874872 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:18Z","lastTransitionTime":"2026-01-26T16:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.977636 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.977907 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.977993 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.978074 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:18 crc kubenswrapper[4856]: I0126 16:59:18.978191 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:18Z","lastTransitionTime":"2026-01-26T16:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:19 crc kubenswrapper[4856]: I0126 16:59:19.081355 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:19 crc kubenswrapper[4856]: I0126 16:59:19.081410 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:19 crc kubenswrapper[4856]: I0126 16:59:19.081427 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:19 crc kubenswrapper[4856]: I0126 16:59:19.081446 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:19 crc kubenswrapper[4856]: I0126 16:59:19.081456 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:19Z","lastTransitionTime":"2026-01-26T16:59:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:19 crc kubenswrapper[4856]: I0126 16:59:19.184613 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:19 crc kubenswrapper[4856]: I0126 16:59:19.184702 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:19 crc kubenswrapper[4856]: I0126 16:59:19.184714 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:19 crc kubenswrapper[4856]: I0126 16:59:19.184740 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:19 crc kubenswrapper[4856]: I0126 16:59:19.184753 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:19Z","lastTransitionTime":"2026-01-26T16:59:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:19 crc kubenswrapper[4856]: I0126 16:59:19.287814 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:19 crc kubenswrapper[4856]: I0126 16:59:19.287888 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:19 crc kubenswrapper[4856]: I0126 16:59:19.287905 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:19 crc kubenswrapper[4856]: I0126 16:59:19.287930 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:19 crc kubenswrapper[4856]: I0126 16:59:19.287944 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:19Z","lastTransitionTime":"2026-01-26T16:59:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:19 crc kubenswrapper[4856]: I0126 16:59:19.391467 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:19 crc kubenswrapper[4856]: I0126 16:59:19.391775 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:19 crc kubenswrapper[4856]: I0126 16:59:19.391793 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:19 crc kubenswrapper[4856]: I0126 16:59:19.391819 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:19 crc kubenswrapper[4856]: I0126 16:59:19.391843 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:19Z","lastTransitionTime":"2026-01-26T16:59:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:19 crc kubenswrapper[4856]: I0126 16:59:19.473759 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 01:23:04.518023633 +0000 UTC Jan 26 16:59:19 crc kubenswrapper[4856]: I0126 16:59:19.495579 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:19 crc kubenswrapper[4856]: I0126 16:59:19.495621 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:19 crc kubenswrapper[4856]: I0126 16:59:19.495645 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:19 crc kubenswrapper[4856]: I0126 16:59:19.495663 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:19 crc kubenswrapper[4856]: I0126 16:59:19.495675 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:19Z","lastTransitionTime":"2026-01-26T16:59:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:19 crc kubenswrapper[4856]: I0126 16:59:19.597470 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:19 crc kubenswrapper[4856]: I0126 16:59:19.597554 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:19 crc kubenswrapper[4856]: I0126 16:59:19.597566 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:19 crc kubenswrapper[4856]: I0126 16:59:19.597582 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:19 crc kubenswrapper[4856]: I0126 16:59:19.597593 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:19Z","lastTransitionTime":"2026-01-26T16:59:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:19 crc kubenswrapper[4856]: I0126 16:59:19.700890 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:19 crc kubenswrapper[4856]: I0126 16:59:19.700944 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:19 crc kubenswrapper[4856]: I0126 16:59:19.700957 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:19 crc kubenswrapper[4856]: I0126 16:59:19.700983 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:19 crc kubenswrapper[4856]: I0126 16:59:19.700995 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:19Z","lastTransitionTime":"2026-01-26T16:59:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:19 crc kubenswrapper[4856]: I0126 16:59:19.803378 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:19 crc kubenswrapper[4856]: I0126 16:59:19.803431 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:19 crc kubenswrapper[4856]: I0126 16:59:19.803443 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:19 crc kubenswrapper[4856]: I0126 16:59:19.803464 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:19 crc kubenswrapper[4856]: I0126 16:59:19.803477 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:19Z","lastTransitionTime":"2026-01-26T16:59:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:19 crc kubenswrapper[4856]: I0126 16:59:19.906854 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:19 crc kubenswrapper[4856]: I0126 16:59:19.906929 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:19 crc kubenswrapper[4856]: I0126 16:59:19.906943 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:19 crc kubenswrapper[4856]: I0126 16:59:19.906965 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:19 crc kubenswrapper[4856]: I0126 16:59:19.906978 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:19Z","lastTransitionTime":"2026-01-26T16:59:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.010167 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.010239 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.010253 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.010276 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.010292 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:20Z","lastTransitionTime":"2026-01-26T16:59:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.114231 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.114312 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.114333 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.114362 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.114378 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:20Z","lastTransitionTime":"2026-01-26T16:59:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.217982 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.218028 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.218039 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.218058 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.218070 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:20Z","lastTransitionTime":"2026-01-26T16:59:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.320369 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.320416 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.320432 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.320448 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.320460 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:20Z","lastTransitionTime":"2026-01-26T16:59:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.394163 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.394163 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.394282 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.394304 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 16:59:20 crc kubenswrapper[4856]: E0126 16:59:20.394632 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:59:20 crc kubenswrapper[4856]: E0126 16:59:20.394780 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-295wr" podUID="12e50462-28e6-4531-ada4-e652310e6cce" Jan 26 16:59:20 crc kubenswrapper[4856]: E0126 16:59:20.394927 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:59:20 crc kubenswrapper[4856]: E0126 16:59:20.395133 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.429946 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.429998 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.430006 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.430019 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.430028 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:20Z","lastTransitionTime":"2026-01-26T16:59:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.474238 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 15:09:37.534428491 +0000 UTC Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.532490 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.532558 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.532568 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.532585 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.532594 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:20Z","lastTransitionTime":"2026-01-26T16:59:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.635249 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.635288 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.635301 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.635317 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.635327 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:20Z","lastTransitionTime":"2026-01-26T16:59:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.738642 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.738709 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.738747 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.738780 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.738888 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:20Z","lastTransitionTime":"2026-01-26T16:59:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.842355 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.842418 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.842437 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.842463 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.842481 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:20Z","lastTransitionTime":"2026-01-26T16:59:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.945242 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.945345 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.945368 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.945391 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:20 crc kubenswrapper[4856]: I0126 16:59:20.945409 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:20Z","lastTransitionTime":"2026-01-26T16:59:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:21 crc kubenswrapper[4856]: I0126 16:59:21.048026 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:21 crc kubenswrapper[4856]: I0126 16:59:21.048072 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:21 crc kubenswrapper[4856]: I0126 16:59:21.048092 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:21 crc kubenswrapper[4856]: I0126 16:59:21.048109 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:21 crc kubenswrapper[4856]: I0126 16:59:21.048126 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:21Z","lastTransitionTime":"2026-01-26T16:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:21 crc kubenswrapper[4856]: I0126 16:59:21.150441 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:21 crc kubenswrapper[4856]: I0126 16:59:21.150504 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:21 crc kubenswrapper[4856]: I0126 16:59:21.150515 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:21 crc kubenswrapper[4856]: I0126 16:59:21.150551 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:21 crc kubenswrapper[4856]: I0126 16:59:21.150561 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:21Z","lastTransitionTime":"2026-01-26T16:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:21 crc kubenswrapper[4856]: I0126 16:59:21.253702 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:21 crc kubenswrapper[4856]: I0126 16:59:21.253747 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:21 crc kubenswrapper[4856]: I0126 16:59:21.253763 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:21 crc kubenswrapper[4856]: I0126 16:59:21.254003 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:21 crc kubenswrapper[4856]: I0126 16:59:21.254029 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:21Z","lastTransitionTime":"2026-01-26T16:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:21 crc kubenswrapper[4856]: I0126 16:59:21.357518 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:21 crc kubenswrapper[4856]: I0126 16:59:21.357624 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:21 crc kubenswrapper[4856]: I0126 16:59:21.357647 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:21 crc kubenswrapper[4856]: I0126 16:59:21.357693 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:21 crc kubenswrapper[4856]: I0126 16:59:21.357718 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:21Z","lastTransitionTime":"2026-01-26T16:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:21 crc kubenswrapper[4856]: I0126 16:59:21.460777 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:21 crc kubenswrapper[4856]: I0126 16:59:21.460828 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:21 crc kubenswrapper[4856]: I0126 16:59:21.460844 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:21 crc kubenswrapper[4856]: I0126 16:59:21.460862 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:21 crc kubenswrapper[4856]: I0126 16:59:21.460873 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:21Z","lastTransitionTime":"2026-01-26T16:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:21 crc kubenswrapper[4856]: I0126 16:59:21.474993 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 16:29:44.323268258 +0000 UTC Jan 26 16:59:21 crc kubenswrapper[4856]: I0126 16:59:21.563337 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:21 crc kubenswrapper[4856]: I0126 16:59:21.563367 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:21 crc kubenswrapper[4856]: I0126 16:59:21.563377 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:21 crc kubenswrapper[4856]: I0126 16:59:21.563392 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:21 crc kubenswrapper[4856]: I0126 16:59:21.563402 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:21Z","lastTransitionTime":"2026-01-26T16:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:21 crc kubenswrapper[4856]: I0126 16:59:21.666657 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:21 crc kubenswrapper[4856]: I0126 16:59:21.666713 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:21 crc kubenswrapper[4856]: I0126 16:59:21.666728 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:21 crc kubenswrapper[4856]: I0126 16:59:21.666749 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:21 crc kubenswrapper[4856]: I0126 16:59:21.666765 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:21Z","lastTransitionTime":"2026-01-26T16:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:21 crc kubenswrapper[4856]: I0126 16:59:21.769667 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:21 crc kubenswrapper[4856]: I0126 16:59:21.769706 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:21 crc kubenswrapper[4856]: I0126 16:59:21.769717 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:21 crc kubenswrapper[4856]: I0126 16:59:21.769733 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:21 crc kubenswrapper[4856]: I0126 16:59:21.769758 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:21Z","lastTransitionTime":"2026-01-26T16:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:21 crc kubenswrapper[4856]: I0126 16:59:21.872646 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:21 crc kubenswrapper[4856]: I0126 16:59:21.872771 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:21 crc kubenswrapper[4856]: I0126 16:59:21.872793 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:21 crc kubenswrapper[4856]: I0126 16:59:21.872821 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:21 crc kubenswrapper[4856]: I0126 16:59:21.872841 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:21Z","lastTransitionTime":"2026-01-26T16:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:21 crc kubenswrapper[4856]: I0126 16:59:21.975058 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:21 crc kubenswrapper[4856]: I0126 16:59:21.975124 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:21 crc kubenswrapper[4856]: I0126 16:59:21.975150 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:21 crc kubenswrapper[4856]: I0126 16:59:21.975180 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:21 crc kubenswrapper[4856]: I0126 16:59:21.975205 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:21Z","lastTransitionTime":"2026-01-26T16:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:22 crc kubenswrapper[4856]: I0126 16:59:22.078412 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:22 crc kubenswrapper[4856]: I0126 16:59:22.078449 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:22 crc kubenswrapper[4856]: I0126 16:59:22.078459 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:22 crc kubenswrapper[4856]: I0126 16:59:22.078488 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:22 crc kubenswrapper[4856]: I0126 16:59:22.078497 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:22Z","lastTransitionTime":"2026-01-26T16:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:22 crc kubenswrapper[4856]: I0126 16:59:22.180842 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:22 crc kubenswrapper[4856]: I0126 16:59:22.180895 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:22 crc kubenswrapper[4856]: I0126 16:59:22.180907 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:22 crc kubenswrapper[4856]: I0126 16:59:22.180926 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:22 crc kubenswrapper[4856]: I0126 16:59:22.180937 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:22Z","lastTransitionTime":"2026-01-26T16:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:22 crc kubenswrapper[4856]: I0126 16:59:22.282801 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:22 crc kubenswrapper[4856]: I0126 16:59:22.282934 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:22 crc kubenswrapper[4856]: I0126 16:59:22.282953 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:22 crc kubenswrapper[4856]: I0126 16:59:22.282971 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:22 crc kubenswrapper[4856]: I0126 16:59:22.282981 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:22Z","lastTransitionTime":"2026-01-26T16:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:22 crc kubenswrapper[4856]: I0126 16:59:22.385491 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:22 crc kubenswrapper[4856]: I0126 16:59:22.385604 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:22 crc kubenswrapper[4856]: I0126 16:59:22.385634 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:22 crc kubenswrapper[4856]: I0126 16:59:22.385664 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:22 crc kubenswrapper[4856]: I0126 16:59:22.385733 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:22Z","lastTransitionTime":"2026-01-26T16:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:22 crc kubenswrapper[4856]: I0126 16:59:22.394764 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:59:22 crc kubenswrapper[4856]: I0126 16:59:22.394809 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:59:22 crc kubenswrapper[4856]: I0126 16:59:22.394824 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 16:59:22 crc kubenswrapper[4856]: I0126 16:59:22.394771 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:59:22 crc kubenswrapper[4856]: E0126 16:59:22.394940 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:59:22 crc kubenswrapper[4856]: E0126 16:59:22.395073 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-295wr" podUID="12e50462-28e6-4531-ada4-e652310e6cce" Jan 26 16:59:22 crc kubenswrapper[4856]: E0126 16:59:22.395182 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:59:22 crc kubenswrapper[4856]: E0126 16:59:22.395347 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:59:22 crc kubenswrapper[4856]: I0126 16:59:22.475182 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 20:16:05.570387367 +0000 UTC Jan 26 16:59:22 crc kubenswrapper[4856]: I0126 16:59:22.488321 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:22 crc kubenswrapper[4856]: I0126 16:59:22.488408 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:22 crc kubenswrapper[4856]: I0126 16:59:22.488418 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:22 crc kubenswrapper[4856]: I0126 16:59:22.488434 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:22 crc kubenswrapper[4856]: I0126 16:59:22.488451 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:22Z","lastTransitionTime":"2026-01-26T16:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:22 crc kubenswrapper[4856]: I0126 16:59:22.591768 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:22 crc kubenswrapper[4856]: I0126 16:59:22.591868 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:22 crc kubenswrapper[4856]: I0126 16:59:22.591879 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:22 crc kubenswrapper[4856]: I0126 16:59:22.591918 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:22 crc kubenswrapper[4856]: I0126 16:59:22.591929 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:22Z","lastTransitionTime":"2026-01-26T16:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:22 crc kubenswrapper[4856]: I0126 16:59:22.694761 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:22 crc kubenswrapper[4856]: I0126 16:59:22.694839 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:22 crc kubenswrapper[4856]: I0126 16:59:22.694855 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:22 crc kubenswrapper[4856]: I0126 16:59:22.694885 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:22 crc kubenswrapper[4856]: I0126 16:59:22.694902 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:22Z","lastTransitionTime":"2026-01-26T16:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:22 crc kubenswrapper[4856]: I0126 16:59:22.798495 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:22 crc kubenswrapper[4856]: I0126 16:59:22.798559 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:22 crc kubenswrapper[4856]: I0126 16:59:22.798573 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:22 crc kubenswrapper[4856]: I0126 16:59:22.798665 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:22 crc kubenswrapper[4856]: I0126 16:59:22.798683 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:22Z","lastTransitionTime":"2026-01-26T16:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:22 crc kubenswrapper[4856]: I0126 16:59:22.901577 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:22 crc kubenswrapper[4856]: I0126 16:59:22.901832 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:22 crc kubenswrapper[4856]: I0126 16:59:22.901855 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:22 crc kubenswrapper[4856]: I0126 16:59:22.901883 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:22 crc kubenswrapper[4856]: I0126 16:59:22.901905 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:22Z","lastTransitionTime":"2026-01-26T16:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.006119 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.006182 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.006198 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.006220 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.006236 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:23Z","lastTransitionTime":"2026-01-26T16:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.109299 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.109351 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.109359 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.109375 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.109385 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:23Z","lastTransitionTime":"2026-01-26T16:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.279254 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.279280 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.279288 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.279301 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.279311 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:23Z","lastTransitionTime":"2026-01-26T16:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.381546 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.381592 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.381600 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.381615 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.381624 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:23Z","lastTransitionTime":"2026-01-26T16:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.476083 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 17:31:29.554584671 +0000 UTC Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.484233 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.484267 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.484276 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.484290 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.484300 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:23Z","lastTransitionTime":"2026-01-26T16:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.587782 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.587820 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.587827 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.587848 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.587861 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:23Z","lastTransitionTime":"2026-01-26T16:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.692370 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.692431 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.692454 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.692487 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.692508 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:23Z","lastTransitionTime":"2026-01-26T16:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.795145 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.796063 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.796175 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.796730 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.796814 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.797208 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:23Z","lastTransitionTime":"2026-01-26T16:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.817378 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-v7579" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a77e85f9-b566-4807-bb92-55963c97b93c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba87c9fc35c230bbee201a5176cb467309f0b9aee82dfc81f3b677a15486d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n9h7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03dc794e9c2035f2e1983eacad3e51d76223cb1b82e2f402c73f9453e4bd2f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n9h7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-v7579\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:23Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.832845 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tp5hk" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f28414c-12c1-4adb-be7b-6182310828eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af4930ff9cebd62106545fb3da2dfc93f7b591426fe4c85aa2e637b60c935f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzc59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tp5hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:23Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.853990 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c09748c0963bdb50c9552f249d1135ea53e68c52babd788dad050951cb849cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:23Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.876514 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:23Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.889702 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:23Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.899624 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.899660 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.899668 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.899683 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.899692 4856 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:23Z","lastTransitionTime":"2026-01-26T16:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.901317 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t4fq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://627cd6b39fdedd967fffcfc0755439277b2d73016fba1ddc7f5b9d5deba43b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p5swq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t4fq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:23Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.915290 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq622" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a742e7b-c420-46e3-9e96-e9c744af6124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7222c9b91c0065a545bc1904d9864a5923cc13bfb6617daeb4a965a830f191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8plh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq622\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:23Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.929451 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59ecd87a-c5db-446d-ad3e-cfabbd648c1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89975b8f9428f81ab5d3fb48ced5dd9c837bea2feea3b89f5f7ff8d7d5d15b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40d432159a07ba20bf95f058bf8a597d67cbd8d852519bed035694f8ba3d8ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ba0f131eee14df389391e46bc772894e49e976c3663df5fb3bc98b3cb4a3d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03a01b651d9f66da4dd1f6e0d29ad97c0d6ae46b644c3d997d8dc99476706df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f07438e20bdf71c752bb661084c835341999c561bb5442d75e177223881276f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T16:58:
51Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 16:58:43.431198 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 16:58:43.432227 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-99450334/tls.crt::/tmp/serving-cert-99450334/tls.key\\\\\\\"\\\\nI0126 16:58:50.828320 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 16:58:50.830449 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 16:58:50.830472 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 16:58:50.830509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 16:58:50.830515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 16:58:50.834885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 16:58:50.834958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 16:58:50.834991 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 16:58:50.834905 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 16:58:50.835015 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 16:58:50.835084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 16:58:50.835106 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 16:58:50.835116 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 16:58:50.837641 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2daa3f13d37b7ae19dfca406b6b5cfdcd4f211287f7d410193f2fee36a24553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffc4b80383322e2c628a33ac37f15fd77c7650ca108c022f97bb48aad023462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ffc4b
80383322e2c628a33ac37f15fd77c7650ca108c022f97bb48aad023462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:23Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.942941 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191ad5d41024c88c1a4fbc30f307dd6340fd55b93b00d22f62209d0e82be286f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf2a4d8be409a46ffe5702797e56a80023a72a19fb7dc5c49e5f4984cedc600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:23Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.955309 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c823748751e9938f10fab08e33b5fffff5a6d15961ce028204131b7b69a56c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T16:59:23Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:23 crc kubenswrapper[4856]: I0126 16:59:23.967745 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:23Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.003596 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.003667 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.003687 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.003712 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.003730 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:24Z","lastTransitionTime":"2026-01-26T16:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.015943 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad7b59f9-beb7-49d6-a2d1-e29133e46854\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc53a6a6d4222ee5e1ac29fd5957d8cdf3fd42de72c68c85329374ce7afc4004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0c0b092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0c0b092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://249b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://249b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v2l7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:24Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.051799 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c166114ee4e41f0a7e4b0590da090e98c319ef6eda0b9611419dfc55ceb139c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c166114ee4e41f0a7e4b0590da090e98c319ef6eda0b9611419dfc55ceb139c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:59:12Z\\\",\\\"message\\\":\\\"etworkPolicy event handler 4 for removal\\\\nI0126 16:59:12.299079 6308 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 16:59:12.299145 6308 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 16:59:12.299187 6308 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 
16:59:12.299270 6308 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 16:59:12.299298 6308 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 16:59:12.299298 6308 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 16:59:12.299306 6308 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 16:59:12.299340 6308 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 16:59:12.299299 6308 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 16:59:12.299385 6308 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 16:59:12.312081 6308 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 16:59:12.312141 6308 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 16:59:12.312158 6308 factory.go:656] Stopping watch factory\\\\nI0126 16:59:12.312172 6308 ovnkube.go:599] Stopped ovnkube\\\\nI0126 16:59:12.312201 6308 handler.go:208] Removed *v1.Node event handler 2\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pxh94_openshift-ovn-kubernetes(ab5b6f50-172b-4535-a0f9-5d103bcab4e7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf
1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:24Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.066663 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-295wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e50462-28e6-4531-ada4-e652310e6cce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf98h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf98h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:59:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-295wr\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:24Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.077933 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc864b0d-83bc-4954-9c61-ad650157caff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbde934a6c8acad10ca3ab8206d0ddbd4f7b17e9d304b898a68f4d3b0303bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:28Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b48f763d4aff37169399be766d5ab4f7ebbf91f304d139c9022a8556946eb107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0094e662f53c4832a984e05a880021af05ffc4c27f25394c28a070d9ef5490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb95f623
fa4f5f42217649ccce4225f9fe588bb3558da9334eef2b40ddb62486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb95f623fa4f5f42217649ccce4225f9fe588bb3558da9334eef2b40ddb62486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:24Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.088015 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63c75ede-5170-4db0-811b-5217ef8d72b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da26ddcfc6ccde3c9aabd63bdef3435f9a9eaab8c095bfeef5670c1295576cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ca3fa13d9e8d442efa93b44a870369f7df3fe7
562d77b98528f5c19a751f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xm9cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:24Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.105449 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.105497 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.105506 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:24 crc 
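The "Failed to update status for pod" entries above all report the same root cause: the `pod.network-node-identity.openshift.io` webhook's serving certificate expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-26T16:59:24Z. A minimal sketch of the date comparison the TLS verifier is making (both timestamps taken from the log; `date -d` assumes GNU coreutils):

```shell
# Timestamps copied from the kubelet log entries above.
not_after="2025-08-24T17:21:41Z"   # certificate notAfter
now="2026-01-26T16:59:24Z"         # node clock at the time of the error

# The TLS handshake fails because "now" is past "notAfter".
if [ "$(date -d "$now" +%s)" -gt "$(date -d "$not_after" +%s)" ]; then
  echo "certificate expired"
fi
```

On a live node, the actual certificate dates could be inspected with `openssl x509 -noout -dates` against the webhook's serving cert, though the on-disk path of that cert is deployment-specific and not shown in this log.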
kubenswrapper[4856]: I0126 16:59:24.105536 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.105549 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:24Z","lastTransitionTime":"2026-01-26T16:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.209517 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.209620 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.209661 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.209697 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.209726 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:24Z","lastTransitionTime":"2026-01-26T16:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.312869 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.312954 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.312972 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.313002 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.313020 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:24Z","lastTransitionTime":"2026-01-26T16:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.395012 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.395087 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.395095 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:59:24 crc kubenswrapper[4856]: E0126 16:59:24.395251 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.395287 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 16:59:24 crc kubenswrapper[4856]: E0126 16:59:24.395441 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:59:24 crc kubenswrapper[4856]: E0126 16:59:24.395617 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-295wr" podUID="12e50462-28e6-4531-ada4-e652310e6cce" Jan 26 16:59:24 crc kubenswrapper[4856]: E0126 16:59:24.395753 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.415929 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.415997 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.416018 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.416040 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.416057 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:24Z","lastTransitionTime":"2026-01-26T16:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.476265 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 13:59:59.608268325 +0000 UTC Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.519209 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.519248 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.519259 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.519320 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.519335 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:24Z","lastTransitionTime":"2026-01-26T16:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.622111 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.622187 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.622205 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.622230 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.622249 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:24Z","lastTransitionTime":"2026-01-26T16:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.725413 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.725477 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.725496 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.725519 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.725567 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:24Z","lastTransitionTime":"2026-01-26T16:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.829567 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.829637 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.829656 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.829685 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.829703 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:24Z","lastTransitionTime":"2026-01-26T16:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.933043 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.933094 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.933106 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.933125 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:24 crc kubenswrapper[4856]: I0126 16:59:24.933144 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:24Z","lastTransitionTime":"2026-01-26T16:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.036730 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.036774 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.036790 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.036807 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.036819 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:25Z","lastTransitionTime":"2026-01-26T16:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.138965 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.139021 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.139033 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.139053 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.139106 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:25Z","lastTransitionTime":"2026-01-26T16:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.242222 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.242259 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.242269 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.242286 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.242297 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:25Z","lastTransitionTime":"2026-01-26T16:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.345072 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.345139 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.345161 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.345190 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.345213 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:25Z","lastTransitionTime":"2026-01-26T16:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.418840 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:25Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.437373 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad7b59f9-beb7-49d6-a2d1-e29133e46854\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc53a6a6d4222ee5e1ac29fd5957d8cdf3fd42de72c68c85329374ce7afc4004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:59Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0c0b
092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0c0b092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:06Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://249b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://249b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v2l7v\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:25Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.448679 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.448762 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.448788 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.448821 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.448846 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:25Z","lastTransitionTime":"2026-01-26T16:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.464294 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c166114ee4e41f0a7e4b0590da090e98c319ef6eda0b9611419dfc55ceb139c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c166114ee4e41f0a7e4b0590da090e98c319ef6eda0b9611419dfc55ceb139c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:59:12Z\\\",\\\"message\\\":\\\"etworkPolicy event handler 4 for removal\\\\nI0126 16:59:12.299079 6308 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 16:59:12.299145 6308 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 16:59:12.299187 6308 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 
16:59:12.299270 6308 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 16:59:12.299298 6308 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 16:59:12.299298 6308 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 16:59:12.299306 6308 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 16:59:12.299340 6308 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 16:59:12.299299 6308 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 16:59:12.299385 6308 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 16:59:12.312081 6308 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 16:59:12.312141 6308 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 16:59:12.312158 6308 factory.go:656] Stopping watch factory\\\\nI0126 16:59:12.312172 6308 ovnkube.go:599] Stopped ovnkube\\\\nI0126 16:59:12.312201 6308 handler.go:208] Removed *v1.Node event handler 2\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pxh94_openshift-ovn-kubernetes(ab5b6f50-172b-4535-a0f9-5d103bcab4e7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf
1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:25Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.476656 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 15:22:52.488465242 +0000 UTC Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.481271 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-295wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e50462-28e6-4531-ada4-e652310e6cce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf98h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf98h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:59:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-295wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:25Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:25 crc 
kubenswrapper[4856]: I0126 16:59:25.498848 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc864b0d-83bc-4954-9c61-ad650157caff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbde934a6c8acad10ca3ab8206d0ddbd4f7b17e9d304b898a68f4d3b0303bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b48f763d4aff37169399be766d5ab4f7ebbf91f304d139c9022a8556946eb107\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0094e662f53c4832a984e05a880021af05ffc4c27f25394c28a070d9ef5490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb95f623fa4f5f42217649ccce4225f9fe588bb3558da9334eef2b40ddb62486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb95f623fa4f5f42217649ccce4225f9fe588bb3558da9334eef2b40ddb62486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:25Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.516065 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63c75ede-5170-4db0-811b-5217ef8d72b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da26ddcfc6ccde3c9aabd63bdef3435f9a9eaab8c095bfeef5670c1295576cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ca3fa13d9e8d442efa93b44a870369f7df3fe7
562d77b98528f5c19a751f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xm9cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:25Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.562234 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.562323 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.562341 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:25 crc 
kubenswrapper[4856]: I0126 16:59:25.562413 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.562447 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:25Z","lastTransitionTime":"2026-01-26T16:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.576195 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tp5hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f28414c-12c1-4adb-be7b-6182310828eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af4930ff9cebd62106545fb3da2dfc93f7b591426fe4c85aa2e637b60c935f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1
e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzc59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tp5hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:25Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.599160 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c09748c0963bdb50c9552f249d1135ea53e68c52babd788dad050951cb849cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:25Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.612637 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:25Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.625252 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-v7579" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a77e85f9-b566-4807-bb92-55963c97b93c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba87c9fc35c230bbee201a5176cb467309f0b9aee82dfc81f3b677a15486d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n9h7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03dc794e9c2035f2e1983eacad3e51d76223
cb1b82e2f402c73f9453e4bd2f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n9h7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-v7579\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:25Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.639652 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59ecd87a-c5db-446d-ad3e-cfabbd648c1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89975b8f9428f81ab5d3fb48ced5dd9c837bea2feea3b89f5f7ff8d7d5d15b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40d432159a07ba20bf95f058bf8a597d67cbd8d852519bed035694f8ba3d8ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ba0f131eee14df389391e46bc772894e49e976c3663df5fb3bc98b3cb4a3d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03a01b651d9f66da4dd1f6e0d29ad97c0d6ae46b644c3d997d8dc99476706df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f07438e20bdf71c752bb661084c835341999c561bb5442d75e177223881276f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T16:58:51Z\\\"
,\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 16:58:43.431198 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 16:58:43.432227 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-99450334/tls.crt::/tmp/serving-cert-99450334/tls.key\\\\\\\"\\\\nI0126 16:58:50.828320 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 16:58:50.830449 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 16:58:50.830472 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 16:58:50.830509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 16:58:50.830515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 16:58:50.834885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 16:58:50.834958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 16:58:50.834991 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 16:58:50.834905 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 16:58:50.835015 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 16:58:50.835084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 16:58:50.835106 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 16:58:50.835116 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 16:58:50.837641 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2daa3f13d37b7ae19dfca406b6b5cfdcd4f211287f7d410193f2fee36a24553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffc4b80383322e2c628a33ac37f15fd77c7650ca108c022f97bb48aad023462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ffc4b80383322e2c628a33ac37f15fd
77c7650ca108c022f97bb48aad023462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:25Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.652563 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191ad5d41024c88c1a4fbc30f307dd6340fd55b93b00d22f62209d0e82be286f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf2a4d8be409a46ffe5702797e56a80023a72a19fb7dc5c49e5f4984cedc600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:25Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.664006 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c823748751e9938f10fab08e33b5fffff5a6d15961ce028204131b7b69a56c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T16:59:25Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.665864 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.665901 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.665912 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.665929 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.665941 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:25Z","lastTransitionTime":"2026-01-26T16:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.675005 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:25Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.685442 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t4fq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://627cd6b39fdedd967fffcfc0755439277b2d73016fba1ddc7f5b9d5deba43b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p5swq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t4fq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:25Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.700247 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq622" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a742e7b-c420-46e3-9e96-e9c744af6124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7222c9b91c0065a545bc1904d9864a5923cc13bfb6617daeb4a965a830f191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8plh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq622\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:25Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.768676 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.768737 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.768754 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.768783 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.768808 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:25Z","lastTransitionTime":"2026-01-26T16:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.871577 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.871640 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.871659 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.871681 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.871698 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:25Z","lastTransitionTime":"2026-01-26T16:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.975590 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.975648 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.975666 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.975689 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:25 crc kubenswrapper[4856]: I0126 16:59:25.975703 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:25Z","lastTransitionTime":"2026-01-26T16:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.079667 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.079750 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.079775 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.079812 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.079838 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:26Z","lastTransitionTime":"2026-01-26T16:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.182956 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.182996 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.183010 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.183028 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.183043 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:26Z","lastTransitionTime":"2026-01-26T16:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.202139 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12e50462-28e6-4531-ada4-e652310e6cce-metrics-certs\") pod \"network-metrics-daemon-295wr\" (UID: \"12e50462-28e6-4531-ada4-e652310e6cce\") " pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 16:59:26 crc kubenswrapper[4856]: E0126 16:59:26.202499 4856 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 16:59:26 crc kubenswrapper[4856]: E0126 16:59:26.202666 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12e50462-28e6-4531-ada4-e652310e6cce-metrics-certs podName:12e50462-28e6-4531-ada4-e652310e6cce nodeName:}" failed. No retries permitted until 2026-01-26 16:59:42.202611998 +0000 UTC m=+78.155865979 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/12e50462-28e6-4531-ada4-e652310e6cce-metrics-certs") pod "network-metrics-daemon-295wr" (UID: "12e50462-28e6-4531-ada4-e652310e6cce") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.285213 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.285260 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.285271 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.285288 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.285298 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:26Z","lastTransitionTime":"2026-01-26T16:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.387968 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.388352 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.388364 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.388407 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.388425 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:26Z","lastTransitionTime":"2026-01-26T16:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.394313 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.394373 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.394313 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 16:59:26 crc kubenswrapper[4856]: E0126 16:59:26.394422 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:59:26 crc kubenswrapper[4856]: E0126 16:59:26.394495 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.394548 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:59:26 crc kubenswrapper[4856]: E0126 16:59:26.394577 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-295wr" podUID="12e50462-28e6-4531-ada4-e652310e6cce" Jan 26 16:59:26 crc kubenswrapper[4856]: E0126 16:59:26.394641 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.478283 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 15:41:02.743482816 +0000 UTC Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.491774 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.491815 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.491824 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.491839 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.492920 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:26Z","lastTransitionTime":"2026-01-26T16:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.596283 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.596337 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.596349 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.596369 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.596382 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:26Z","lastTransitionTime":"2026-01-26T16:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.606688 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.606902 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.606949 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.606976 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:59:26 crc kubenswrapper[4856]: E0126 16:59:26.607009 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 16:59:58.606977354 +0000 UTC m=+94.560231335 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.607085 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:59:26 crc kubenswrapper[4856]: E0126 16:59:26.607125 4856 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 16:59:26 crc kubenswrapper[4856]: E0126 16:59:26.607184 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 16:59:26 crc kubenswrapper[4856]: E0126 16:59:26.607222 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 16:59:26 crc kubenswrapper[4856]: E0126 16:59:26.607239 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert 
podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 16:59:58.60721099 +0000 UTC m=+94.560464981 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 16:59:26 crc kubenswrapper[4856]: E0126 16:59:26.607247 4856 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:59:26 crc kubenswrapper[4856]: E0126 16:59:26.607266 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 16:59:26 crc kubenswrapper[4856]: E0126 16:59:26.607283 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 16:59:26 crc kubenswrapper[4856]: E0126 16:59:26.607294 4856 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:59:26 crc kubenswrapper[4856]: E0126 16:59:26.607298 4856 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not 
registered Jan 26 16:59:26 crc kubenswrapper[4856]: E0126 16:59:26.607313 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 16:59:58.607299973 +0000 UTC m=+94.560554164 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:59:26 crc kubenswrapper[4856]: E0126 16:59:26.607331 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 16:59:58.607324094 +0000 UTC m=+94.560578075 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:59:26 crc kubenswrapper[4856]: E0126 16:59:26.607371 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 16:59:58.607350934 +0000 UTC m=+94.560604905 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.756795 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.756829 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.756838 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.756875 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.756884 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:26Z","lastTransitionTime":"2026-01-26T16:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.859487 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.859549 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.859559 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.859578 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.859590 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:26Z","lastTransitionTime":"2026-01-26T16:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.962315 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.962364 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.962376 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.962398 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:26 crc kubenswrapper[4856]: I0126 16:59:26.962410 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:26Z","lastTransitionTime":"2026-01-26T16:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:27 crc kubenswrapper[4856]: I0126 16:59:27.064929 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:27 crc kubenswrapper[4856]: I0126 16:59:27.064970 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:27 crc kubenswrapper[4856]: I0126 16:59:27.064980 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:27 crc kubenswrapper[4856]: I0126 16:59:27.064999 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:27 crc kubenswrapper[4856]: I0126 16:59:27.065011 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:27Z","lastTransitionTime":"2026-01-26T16:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:27 crc kubenswrapper[4856]: I0126 16:59:27.174243 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:27 crc kubenswrapper[4856]: I0126 16:59:27.174307 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:27 crc kubenswrapper[4856]: I0126 16:59:27.174319 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:27 crc kubenswrapper[4856]: I0126 16:59:27.174339 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:27 crc kubenswrapper[4856]: I0126 16:59:27.174359 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:27Z","lastTransitionTime":"2026-01-26T16:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:27 crc kubenswrapper[4856]: I0126 16:59:27.277208 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:27 crc kubenswrapper[4856]: I0126 16:59:27.277257 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:27 crc kubenswrapper[4856]: I0126 16:59:27.277283 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:27 crc kubenswrapper[4856]: I0126 16:59:27.277309 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:27 crc kubenswrapper[4856]: I0126 16:59:27.277324 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:27Z","lastTransitionTime":"2026-01-26T16:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:27 crc kubenswrapper[4856]: I0126 16:59:27.378992 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:27 crc kubenswrapper[4856]: I0126 16:59:27.379063 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:27 crc kubenswrapper[4856]: I0126 16:59:27.379086 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:27 crc kubenswrapper[4856]: I0126 16:59:27.379116 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:27 crc kubenswrapper[4856]: I0126 16:59:27.379140 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:27Z","lastTransitionTime":"2026-01-26T16:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:27 crc kubenswrapper[4856]: I0126 16:59:27.479290 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 19:08:33.920138624 +0000 UTC Jan 26 16:59:27 crc kubenswrapper[4856]: I0126 16:59:27.481169 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:27 crc kubenswrapper[4856]: I0126 16:59:27.481215 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:27 crc kubenswrapper[4856]: I0126 16:59:27.481230 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:27 crc kubenswrapper[4856]: I0126 16:59:27.481251 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:27 crc kubenswrapper[4856]: I0126 16:59:27.481273 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:27Z","lastTransitionTime":"2026-01-26T16:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:27 crc kubenswrapper[4856]: I0126 16:59:27.583922 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:27 crc kubenswrapper[4856]: I0126 16:59:27.583986 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:27 crc kubenswrapper[4856]: I0126 16:59:27.584009 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:27 crc kubenswrapper[4856]: I0126 16:59:27.584032 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:27 crc kubenswrapper[4856]: I0126 16:59:27.584047 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:27Z","lastTransitionTime":"2026-01-26T16:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:27 crc kubenswrapper[4856]: I0126 16:59:27.686116 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:27 crc kubenswrapper[4856]: I0126 16:59:27.686144 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:27 crc kubenswrapper[4856]: I0126 16:59:27.686154 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:27 crc kubenswrapper[4856]: I0126 16:59:27.686167 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:27 crc kubenswrapper[4856]: I0126 16:59:27.686176 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:27Z","lastTransitionTime":"2026-01-26T16:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:27 crc kubenswrapper[4856]: I0126 16:59:27.788192 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:27 crc kubenswrapper[4856]: I0126 16:59:27.788226 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:27 crc kubenswrapper[4856]: I0126 16:59:27.788235 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:27 crc kubenswrapper[4856]: I0126 16:59:27.788249 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:27 crc kubenswrapper[4856]: I0126 16:59:27.788259 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:27Z","lastTransitionTime":"2026-01-26T16:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:27 crc kubenswrapper[4856]: I0126 16:59:27.890777 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:27 crc kubenswrapper[4856]: I0126 16:59:27.890825 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:27 crc kubenswrapper[4856]: I0126 16:59:27.890834 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:27 crc kubenswrapper[4856]: I0126 16:59:27.890851 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:27 crc kubenswrapper[4856]: I0126 16:59:27.890861 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:27Z","lastTransitionTime":"2026-01-26T16:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:27 crc kubenswrapper[4856]: I0126 16:59:27.993676 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:27 crc kubenswrapper[4856]: I0126 16:59:27.993729 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:27 crc kubenswrapper[4856]: I0126 16:59:27.993747 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:27 crc kubenswrapper[4856]: I0126 16:59:27.993763 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:27 crc kubenswrapper[4856]: I0126 16:59:27.993773 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:27Z","lastTransitionTime":"2026-01-26T16:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.097653 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.097698 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.097709 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.097725 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.097736 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:28Z","lastTransitionTime":"2026-01-26T16:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.200676 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.200713 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.200722 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.200736 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.200746 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:28Z","lastTransitionTime":"2026-01-26T16:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.303629 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.303690 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.303707 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.303735 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.303753 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:28Z","lastTransitionTime":"2026-01-26T16:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.395004 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.395038 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.395106 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.395062 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:59:28 crc kubenswrapper[4856]: E0126 16:59:28.395190 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-295wr" podUID="12e50462-28e6-4531-ada4-e652310e6cce" Jan 26 16:59:28 crc kubenswrapper[4856]: E0126 16:59:28.395264 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:59:28 crc kubenswrapper[4856]: E0126 16:59:28.395370 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:59:28 crc kubenswrapper[4856]: E0126 16:59:28.395804 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.396437 4856 scope.go:117] "RemoveContainer" containerID="7c166114ee4e41f0a7e4b0590da090e98c319ef6eda0b9611419dfc55ceb139c" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.406676 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.406712 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.406723 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.406740 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.406751 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:28Z","lastTransitionTime":"2026-01-26T16:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.479541 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 20:08:43.544369216 +0000 UTC Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.508626 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.508674 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.508683 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.508702 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.508715 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:28Z","lastTransitionTime":"2026-01-26T16:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.611152 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.611192 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.611205 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.611224 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.611241 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:28Z","lastTransitionTime":"2026-01-26T16:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.714379 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.714420 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.714431 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.714448 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.714463 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:28Z","lastTransitionTime":"2026-01-26T16:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.816436 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.816499 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.816513 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.816556 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.816575 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:28Z","lastTransitionTime":"2026-01-26T16:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.858041 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.858088 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.858101 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.858118 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.858130 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:28Z","lastTransitionTime":"2026-01-26T16:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:28 crc kubenswrapper[4856]: E0126 16:59:28.871657 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17523591-a778-4a97-aeab-8a7a93101850\\\",\\\"systemUUID\\\":\\\"ca45d056-99cb-4442-8a44-7e899628ecb2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:28Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.875697 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.875752 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.875768 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.875787 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.875798 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:28Z","lastTransitionTime":"2026-01-26T16:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:28 crc kubenswrapper[4856]: E0126 16:59:28.887860 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17523591-a778-4a97-aeab-8a7a93101850\\\",\\\"systemUUID\\\":\\\"ca45d056-99cb-4442-8a44-7e899628ecb2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:28Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.891093 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.891118 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.891140 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.891154 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.891164 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:28Z","lastTransitionTime":"2026-01-26T16:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:28 crc kubenswrapper[4856]: E0126 16:59:28.908003 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17523591-a778-4a97-aeab-8a7a93101850\\\",\\\"systemUUID\\\":\\\"ca45d056-99cb-4442-8a44-7e899628ecb2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:28Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.911604 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.911638 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.911649 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.911667 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.911679 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:28Z","lastTransitionTime":"2026-01-26T16:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:28 crc kubenswrapper[4856]: E0126 16:59:28.927408 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17523591-a778-4a97-aeab-8a7a93101850\\\",\\\"systemUUID\\\":\\\"ca45d056-99cb-4442-8a44-7e899628ecb2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:28Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.930714 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.930735 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.930743 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.930756 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:28 crc kubenswrapper[4856]: I0126 16:59:28.930765 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:28Z","lastTransitionTime":"2026-01-26T16:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:28 crc kubenswrapper[4856]: E0126 16:59:28.943258 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17523591-a778-4a97-aeab-8a7a93101850\\\",\\\"systemUUID\\\":\\\"ca45d056-99cb-4442-8a44-7e899628ecb2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:28Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:28 crc kubenswrapper[4856]: E0126 16:59:28.943449 4856 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.039401 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.039458 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.039469 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.039485 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.039496 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:29Z","lastTransitionTime":"2026-01-26T16:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.141798 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.141840 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.141851 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.141867 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.141881 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:29Z","lastTransitionTime":"2026-01-26T16:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.244182 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.244221 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.244232 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.244250 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.244261 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:29Z","lastTransitionTime":"2026-01-26T16:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.304812 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pxh94_ab5b6f50-172b-4535-a0f9-5d103bcab4e7/ovnkube-controller/1.log" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.307192 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" event={"ID":"ab5b6f50-172b-4535-a0f9-5d103bcab4e7","Type":"ContainerStarted","Data":"71cb66e6f52823e7aa63f88ba1d153fde73816120aa75d4a6b910937303d2b9e"} Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.307642 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.346431 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.346682 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.346791 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.346912 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.347022 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:29Z","lastTransitionTime":"2026-01-26T16:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.356155 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-295wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e50462-28e6-4531-ada4-e652310e6cce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf98h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf98h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:59:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-295wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:29Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:29 crc 
kubenswrapper[4856]: I0126 16:59:29.373361 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:29Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.389690 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad7b59f9-beb7-49d6-a2d1-e29133e46854\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc53a6a6d4222ee5e1ac29fd5957d8cdf3fd42de72c68c85329374ce7afc4004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:59Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0c0b
092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0c0b092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:06Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://249b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://249b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v2l7v\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:29Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.416748 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71cb66e6f52823e7aa63f88ba1d153fde73816120aa75d4a6b910937303d2b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c166114ee4e41f0a7e4b0590da090e98c319ef6eda0b9611419dfc55ceb139c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:59:12Z\\\",\\\"message\\\":\\\"etworkPolicy event handler 4 for removal\\\\nI0126 16:59:12.299079 6308 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 16:59:12.299145 6308 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 16:59:12.299187 6308 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 
16:59:12.299270 6308 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 16:59:12.299298 6308 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 16:59:12.299298 6308 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 16:59:12.299306 6308 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 16:59:12.299340 6308 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 16:59:12.299299 6308 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 16:59:12.299385 6308 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 16:59:12.312081 6308 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 16:59:12.312141 6308 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 16:59:12.312158 6308 factory.go:656] Stopping watch factory\\\\nI0126 16:59:12.312172 6308 ovnkube.go:599] Stopped ovnkube\\\\nI0126 16:59:12.312201 6308 handler.go:208] Removed *v1.Node event handler 
2\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"
name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:29Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.430870 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc864b0d-83bc-4954-9c61-ad650157caff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbde934a6c8acad10ca3ab8206d0ddbd4f7b17e9d304b898a68f4d3b0303bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b48f763d4aff37169399be766d5ab4f7ebbf91f304d139c9022a8556946eb107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0094e662f53c4832a984e05a880021af05ffc4c27f25394c28a070d9ef5490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb95f623fa4f5f42217649ccce4225f9fe588bb3558da9334eef2b40ddb62486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://cb95f623fa4f5f42217649ccce4225f9fe588bb3558da9334eef2b40ddb62486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:29Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.444814 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63c75ede-5170-4db0-811b-5217ef8d72b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da26ddcfc6ccde3c9aabd63bdef3435f9a9eaab8c095bfeef5670c1295576cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ca3fa13d9e8d442efa93b44a870369f7df3fe7
562d77b98528f5c19a751f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xm9cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:29Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.452166 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.452202 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.452213 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:29 crc 
kubenswrapper[4856]: I0126 16:59:29.452230 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.452240 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:29Z","lastTransitionTime":"2026-01-26T16:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.460499 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:29Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.488927 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-v7579" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a77e85f9-b566-4807-bb92-55963c97b93c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba87c9fc35c230bbee201a5176cb467309f0b9aee82dfc81f3b677a15486d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n9h7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03dc794e9c2035f2e1983eacad3e51d76223
cb1b82e2f402c73f9453e4bd2f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n9h7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-v7579\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:29Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.504412 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tp5hk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f28414c-12c1-4adb-be7b-6182310828eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af4930ff9cebd62106545fb3da2dfc93f7b591426fe4c85aa2e637b60c935f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzc59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tp5hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:29Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.519454 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c09748c0963bdb50c9552f249d1135ea53e68c52babd788dad050951cb849cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:29Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.527652 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 02:57:30.885241647 +0000 UTC Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.537433 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c823748751e9938f10fab08e33b5fffff5a6d15961ce028204131b7b69a56c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T16:59:29Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.555235 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.555275 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.555288 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.555305 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.555315 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:29Z","lastTransitionTime":"2026-01-26T16:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.558147 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:29Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.593336 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t4fq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://627cd6b39fdedd967fffcfc0755439277b2d73016fba1ddc7f5b9d5deba43b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p5swq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t4fq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:29Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.614746 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq622" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a742e7b-c420-46e3-9e96-e9c744af6124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7222c9b91c0065a545bc1904d9864a5923cc13bfb6617daeb4a965a830f191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8plh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq622\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:29Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.639028 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59ecd87a-c5db-446d-ad3e-cfabbd648c1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89975b8f9428f81ab5d3fb48ced5dd9c837bea2feea3b89f5f7ff8d7d5d15b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40d432159a07ba20bf95f058bf8a597d67cbd8d852519bed035694f8ba3d8ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ba0f131eee14df389391e46bc772894e49e976c3663df5fb3bc98b3cb4a3d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\
\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03a01b651d9f66da4dd1f6e0d29ad97c0d6ae46b644c3d997d8dc99476706df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f07438e20bdf71c752bb661084c835341999c561bb5442d75e177223881276f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T16:58:51Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 16:58:43.431198 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 16:58:43.432227 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-99450334/tls.crt::/tmp/serving-cert-99450334/tls.key\\\\\\\"\\\\nI0126 16:58:50.828320 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 16:58:50.830449 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 16:58:50.830472 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 16:58:50.830509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 16:58:50.830515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 16:58:50.834885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 16:58:50.834958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 
16:58:50.834991 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 16:58:50.834905 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 16:58:50.835015 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 16:58:50.835084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 16:58:50.835106 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 16:58:50.835116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 16:58:50.837641 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2daa3f13d37b7ae19dfca406b6b5cfdcd4f211287f7d410193f2fee36a24553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffc4b80383322e2c628a33ac37f15fd77c7650ca108c022f97bb48aad023462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ffc4b80383322e2c628a33ac37f15fd77c7650ca108c022f97bb48aad023462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:29Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.655952 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191ad5d41024c88c1a4fbc30f307dd6340fd55b93b00d22f62209d0e82be286f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf2a4d8be409a46ffe5702797e56a80023a72a19fb7dc5c49e5f4984cedc600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:29Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.658577 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.658629 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.658644 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.658668 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.658683 4856 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:29Z","lastTransitionTime":"2026-01-26T16:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.761708 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.761767 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.761778 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.761803 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.761817 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:29Z","lastTransitionTime":"2026-01-26T16:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.864577 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.864624 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.864634 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.864651 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.864662 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:29Z","lastTransitionTime":"2026-01-26T16:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.967312 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.967658 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.967749 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.967835 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:29 crc kubenswrapper[4856]: I0126 16:59:29.967903 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:29Z","lastTransitionTime":"2026-01-26T16:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.071690 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.071742 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.071755 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.071773 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.071786 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:30Z","lastTransitionTime":"2026-01-26T16:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.175121 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.175164 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.175175 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.175190 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.175201 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:30Z","lastTransitionTime":"2026-01-26T16:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.511966 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:59:30 crc kubenswrapper[4856]: E0126 16:59:30.512098 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.512169 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.512244 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.512271 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:59:30 crc kubenswrapper[4856]: E0126 16:59:30.512328 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-295wr" podUID="12e50462-28e6-4531-ada4-e652310e6cce" Jan 26 16:59:30 crc kubenswrapper[4856]: E0126 16:59:30.512777 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:59:30 crc kubenswrapper[4856]: E0126 16:59:30.512823 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.516927 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.516954 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.516963 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.516978 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.516988 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:30Z","lastTransitionTime":"2026-01-26T16:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.519202 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pxh94_ab5b6f50-172b-4535-a0f9-5d103bcab4e7/ovnkube-controller/2.log" Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.520269 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pxh94_ab5b6f50-172b-4535-a0f9-5d103bcab4e7/ovnkube-controller/1.log" Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.522220 4856 generic.go:334] "Generic (PLEG): container finished" podID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerID="71cb66e6f52823e7aa63f88ba1d153fde73816120aa75d4a6b910937303d2b9e" exitCode=1 Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.522254 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" event={"ID":"ab5b6f50-172b-4535-a0f9-5d103bcab4e7","Type":"ContainerDied","Data":"71cb66e6f52823e7aa63f88ba1d153fde73816120aa75d4a6b910937303d2b9e"} Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.522288 4856 scope.go:117] "RemoveContainer" containerID="7c166114ee4e41f0a7e4b0590da090e98c319ef6eda0b9611419dfc55ceb139c" Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.522867 4856 scope.go:117] "RemoveContainer" containerID="71cb66e6f52823e7aa63f88ba1d153fde73816120aa75d4a6b910937303d2b9e" Jan 26 16:59:30 crc kubenswrapper[4856]: E0126 16:59:30.522999 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pxh94_openshift-ovn-kubernetes(ab5b6f50-172b-4535-a0f9-5d103bcab4e7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.528159 4856 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 06:33:25.34641691 +0000 UTC Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.540327 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq622" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a742e7b-c420-46e3-9e96-e9c744af6124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7222c9b91c0065a545bc1904d9864a5923cc13bfb6617daeb4a965a830f191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\
\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8plh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq622\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:30Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.557590 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59ecd87a-c5db-446d-ad3e-cfabbd648c1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89975b8f9428f81ab5d3fb48ced5dd9c837bea2feea3b89f5f7ff8d7d5d15b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\
\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40d432159a07ba20bf95f058bf8a597d67cbd8d852519bed035694f8ba3d8ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ba0f131eee14df389391e46bc772894e49e976c3663df5fb3bc98b3cb4a3d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03a01b651d9f66da4dd1f6e0d29ad97c0d6ae46b644c3d997d8dc99476706df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":
\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f07438e20bdf71c752bb661084c835341999c561bb5442d75e177223881276f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T16:58:51Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 16:58:43.431198 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 16:58:43.432227 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-99450334/tls.crt::/tmp/serving-cert-99450334/tls.key\\\\\\\"\\\\nI0126 16:58:50.828320 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 16:58:50.830449 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 16:58:50.830472 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 16:58:50.830509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 16:58:50.830515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 16:58:50.834885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 16:58:50.834958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 16:58:50.834991 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 16:58:50.834905 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 16:58:50.835015 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 16:58:50.835084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 16:58:50.835106 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 16:58:50.835116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 16:58:50.837641 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2daa3f13d37b7ae19dfca406b6b5cfdcd4f211287f7d410193f2fee36a24553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffc4b80383322e2c628a33ac37f15fd77c7650ca108c022f97bb48aad023462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ffc4b80383322e2c628a33ac37f15fd77c7650ca108c022f97bb48aad023462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:30Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.572871 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191ad5d41024c88c1a4fbc30f307dd6340fd55b93b00d22f62209d0e82be286f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf2a4d8be409a46ffe5702797e56a80023a72a19fb7dc5c49e5f4984cedc600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:30Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.586949 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c823748751e9938f10fab08e33b5fffff5a6d15961ce028204131b7b69a56c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T16:59:30Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.602838 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:30Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.614059 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t4fq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://627cd6b39fdedd967fffcfc0755439277b2d73016fba1ddc7f5b9d5deba43b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p5swq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t4fq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:30Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.619977 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.620015 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.620026 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.620041 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.620056 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:30Z","lastTransitionTime":"2026-01-26T16:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.631238 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:30Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.650402 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad7b59f9-beb7-49d6-a2d1-e29133e46854\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc53a6a6d4222ee5e1ac29fd5957d8cdf3fd42de72c68c85329374ce7afc4004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:59Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0c0b
092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0c0b092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:06Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://249b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://249b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v2l7v\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:30Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.674399 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71cb66e6f52823e7aa63f88ba1d153fde73816120aa75d4a6b910937303d2b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c166114ee4e41f0a7e4b0590da090e98c319ef6eda0b9611419dfc55ceb139c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:59:12Z\\\",\\\"message\\\":\\\"etworkPolicy event handler 4 for removal\\\\nI0126 16:59:12.299079 6308 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 16:59:12.299145 6308 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 16:59:12.299187 6308 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 
16:59:12.299270 6308 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 16:59:12.299298 6308 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 16:59:12.299298 6308 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 16:59:12.299306 6308 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 16:59:12.299340 6308 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 16:59:12.299299 6308 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 16:59:12.299385 6308 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 16:59:12.312081 6308 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 16:59:12.312141 6308 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 16:59:12.312158 6308 factory.go:656] Stopping watch factory\\\\nI0126 16:59:12.312172 6308 ovnkube.go:599] Stopped ovnkube\\\\nI0126 16:59:12.312201 6308 handler.go:208] Removed *v1.Node event handler 2\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71cb66e6f52823e7aa63f88ba1d153fde73816120aa75d4a6b910937303d2b9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:59:29Z\\\",\\\"message\\\":\\\"n-kubernetes/ovnkube-node-pxh94\\\\nI0126 16:59:29.834990 6550 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-t4fq2\\\\nI0126 16:59:29.834996 6550 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-t4fq2\\\\nI0126 16:59:29.835005 6550 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-t4fq2 in node crc\\\\nI0126 16:59:29.835026 6550 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports 
Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 16:59:29.835056 6550 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0126 16:59:29.835079 6550 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0126 16:59:29.835090 6550 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0126 16:59:29.835097 6550 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net
.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:30Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.689293 4856 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-multus/network-metrics-daemon-295wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e50462-28e6-4531-ada4-e652310e6cce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf98h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf98h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:59:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-295wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:30Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:30 crc 
kubenswrapper[4856]: I0126 16:59:30.704582 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc864b0d-83bc-4954-9c61-ad650157caff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbde934a6c8acad10ca3ab8206d0ddbd4f7b17e9d304b898a68f4d3b0303bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b48f763d4aff37169399be766d5ab4f7ebbf91f304d139c9022a8556946eb107\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0094e662f53c4832a984e05a880021af05ffc4c27f25394c28a070d9ef5490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb95f623fa4f5f42217649ccce4225f9fe588bb3558da9334eef2b40ddb62486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb95f623fa4f5f42217649ccce4225f9fe588bb3558da9334eef2b40ddb62486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:30Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.718734 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63c75ede-5170-4db0-811b-5217ef8d72b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da26ddcfc6ccde3c9aabd63bdef3435f9a9eaab8c095bfeef5670c1295576cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ca3fa13d9e8d442efa93b44a870369f7df3fe7
562d77b98528f5c19a751f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xm9cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:30Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.722560 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.722598 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.722608 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:30 crc 
kubenswrapper[4856]: I0126 16:59:30.722624 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.722634 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:30Z","lastTransitionTime":"2026-01-26T16:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.732746 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tp5hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f28414c-12c1-4adb-be7b-6182310828eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af4930ff9cebd62106545fb3da2dfc93f7b591426fe4c85aa2e637b60c935f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1
e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzc59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tp5hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:30Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.750450 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c09748c0963bdb50c9552f249d1135ea53e68c52babd788dad050951cb849cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:30Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.764769 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:30Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.777515 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-v7579" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a77e85f9-b566-4807-bb92-55963c97b93c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba87c9fc35c230bbee201a5176cb467309f0b9aee82dfc81f3b677a15486d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n9h7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03dc794e9c2035f2e1983eacad3e51d76223
cb1b82e2f402c73f9453e4bd2f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n9h7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-v7579\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:30Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.824464 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.824488 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.824495 4856 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.824510 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.824518 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:30Z","lastTransitionTime":"2026-01-26T16:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.927394 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.927424 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.927433 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.927446 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:30 crc kubenswrapper[4856]: I0126 16:59:30.927455 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:30Z","lastTransitionTime":"2026-01-26T16:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:31 crc kubenswrapper[4856]: I0126 16:59:31.029635 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:31 crc kubenswrapper[4856]: I0126 16:59:31.029694 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:31 crc kubenswrapper[4856]: I0126 16:59:31.029710 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:31 crc kubenswrapper[4856]: I0126 16:59:31.029731 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:31 crc kubenswrapper[4856]: I0126 16:59:31.029745 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:31Z","lastTransitionTime":"2026-01-26T16:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:31 crc kubenswrapper[4856]: I0126 16:59:31.132597 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:31 crc kubenswrapper[4856]: I0126 16:59:31.132645 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:31 crc kubenswrapper[4856]: I0126 16:59:31.132655 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:31 crc kubenswrapper[4856]: I0126 16:59:31.132671 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:31 crc kubenswrapper[4856]: I0126 16:59:31.132682 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:31Z","lastTransitionTime":"2026-01-26T16:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:31 crc kubenswrapper[4856]: I0126 16:59:31.235318 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:31 crc kubenswrapper[4856]: I0126 16:59:31.235473 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:31 crc kubenswrapper[4856]: I0126 16:59:31.235516 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:31 crc kubenswrapper[4856]: I0126 16:59:31.235575 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:31 crc kubenswrapper[4856]: I0126 16:59:31.235591 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:31Z","lastTransitionTime":"2026-01-26T16:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:31 crc kubenswrapper[4856]: I0126 16:59:31.338218 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:31 crc kubenswrapper[4856]: I0126 16:59:31.338252 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:31 crc kubenswrapper[4856]: I0126 16:59:31.338259 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:31 crc kubenswrapper[4856]: I0126 16:59:31.338272 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:31 crc kubenswrapper[4856]: I0126 16:59:31.338281 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:31Z","lastTransitionTime":"2026-01-26T16:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:31 crc kubenswrapper[4856]: I0126 16:59:31.440950 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:31 crc kubenswrapper[4856]: I0126 16:59:31.440999 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:31 crc kubenswrapper[4856]: I0126 16:59:31.441011 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:31 crc kubenswrapper[4856]: I0126 16:59:31.441029 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:31 crc kubenswrapper[4856]: I0126 16:59:31.441040 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:31Z","lastTransitionTime":"2026-01-26T16:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:31 crc kubenswrapper[4856]: I0126 16:59:31.526708 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pxh94_ab5b6f50-172b-4535-a0f9-5d103bcab4e7/ovnkube-controller/2.log" Jan 26 16:59:31 crc kubenswrapper[4856]: I0126 16:59:31.528261 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 09:57:18.474251678 +0000 UTC Jan 26 16:59:31 crc kubenswrapper[4856]: I0126 16:59:31.530106 4856 scope.go:117] "RemoveContainer" containerID="71cb66e6f52823e7aa63f88ba1d153fde73816120aa75d4a6b910937303d2b9e" Jan 26 16:59:31 crc kubenswrapper[4856]: E0126 16:59:31.530328 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pxh94_openshift-ovn-kubernetes(ab5b6f50-172b-4535-a0f9-5d103bcab4e7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" Jan 26 16:59:31 crc kubenswrapper[4856]: I0126 16:59:31.542753 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:31 crc kubenswrapper[4856]: I0126 16:59:31.542793 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:31 crc kubenswrapper[4856]: I0126 16:59:31.542805 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:31 crc kubenswrapper[4856]: I0126 16:59:31.542826 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:31 crc kubenswrapper[4856]: I0126 16:59:31.542837 4856 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:31Z","lastTransitionTime":"2026-01-26T16:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:31 crc kubenswrapper[4856]: I0126 16:59:31.545766 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:31Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:31 crc kubenswrapper[4856]: I0126 16:59:31.561827 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad7b59f9-beb7-49d6-a2d1-e29133e46854\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc53a6a6d4222ee5e1ac29fd5957d8cdf3fd42de72c68c85329374ce7afc4004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:59Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0c0b
092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0c0b092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:06Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://249b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://249b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v2l7v\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:31Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:31 crc kubenswrapper[4856]: I0126 16:59:31.924702 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:31 crc kubenswrapper[4856]: I0126 16:59:31.924732 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:31 crc kubenswrapper[4856]: I0126 16:59:31.924742 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:31 crc kubenswrapper[4856]: I0126 16:59:31.924755 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:31 crc kubenswrapper[4856]: I0126 16:59:31.924764 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:31Z","lastTransitionTime":"2026-01-26T16:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:31 crc kubenswrapper[4856]: I0126 16:59:31.955745 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71cb66e6f52823e7aa63f88ba1d153fde73816120aa75d4a6b910937303d2b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71cb66e6f52823e7aa63f88ba1d153fde73816120aa75d4a6b910937303d2b9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:59:29Z\\\",\\\"message\\\":\\\"n-kubernetes/ovnkube-node-pxh94\\\\nI0126 16:59:29.834990 6550 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-t4fq2\\\\nI0126 16:59:29.834996 6550 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-t4fq2\\\\nI0126 16:59:29.835005 6550 ovn.go:134] Ensuring zone local for Pod 
openshift-dns/node-resolver-t4fq2 in node crc\\\\nI0126 16:59:29.835026 6550 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 16:59:29.835056 6550 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0126 16:59:29.835079 6550 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0126 16:59:29.835090 6550 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0126 16:59:29.835097 6550 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pxh94_openshift-ovn-kubernetes(ab5b6f50-172b-4535-a0f9-5d103bcab4e7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf
1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:31Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:31 crc kubenswrapper[4856]: I0126 16:59:31.968813 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-295wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e50462-28e6-4531-ada4-e652310e6cce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf98h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf98h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:59:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-295wr\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:31Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:31 crc kubenswrapper[4856]: I0126 16:59:31.984725 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc864b0d-83bc-4954-9c61-ad650157caff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbde934a6c8acad10ca3ab8206d0ddbd4f7b17e9d304b898a68f4d3b0303bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:28Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b48f763d4aff37169399be766d5ab4f7ebbf91f304d139c9022a8556946eb107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0094e662f53c4832a984e05a880021af05ffc4c27f25394c28a070d9ef5490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb95f623
fa4f5f42217649ccce4225f9fe588bb3558da9334eef2b40ddb62486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb95f623fa4f5f42217649ccce4225f9fe588bb3558da9334eef2b40ddb62486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:31Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:31 crc kubenswrapper[4856]: I0126 16:59:31.995794 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63c75ede-5170-4db0-811b-5217ef8d72b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da26ddcfc6ccde3c9aabd63bdef3435f9a9eaab8c095bfeef5670c1295576cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ca3fa13d9e8d442efa93b44a870369f7df3fe7
562d77b98528f5c19a751f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xm9cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:31Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.006174 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tp5hk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f28414c-12c1-4adb-be7b-6182310828eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af4930ff9cebd62106545fb3da2dfc93f7b591426fe4c85aa2e637b60c935f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzc59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tp5hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:32Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.018911 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c09748c0963bdb50c9552f249d1135ea53e68c52babd788dad050951cb849cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:32Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.026645 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.026680 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.026691 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.026709 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.026722 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:32Z","lastTransitionTime":"2026-01-26T16:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.033665 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:32Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.045201 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-v7579" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a77e85f9-b566-4807-bb92-55963c97b93c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba87c9fc35c230bbee201a5176cb467309f0b9aee82dfc81f3b677a15486d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n9h7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03dc794e9c2035f2e1983eacad3e51d76223
cb1b82e2f402c73f9453e4bd2f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n9h7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-v7579\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:32Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.060010 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59ecd87a-c5db-446d-ad3e-cfabbd648c1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89975b8f9428f81ab5d3fb48ced5dd9c837bea2feea3b89f5f7ff8d7d5d15b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40d432159a07ba20bf95f058bf8a597d67cbd8d852519bed035694f8ba3d8ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ba0f131eee14df389391e46bc772894e49e976c3663df5fb3bc98b3cb4a3d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03a01b651d9f66da4dd1f6e0d29ad97c0d6ae46b644c3d997d8dc99476706df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f07438e20bdf71c752bb661084c835341999c561bb5442d75e177223881276f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T16:58:51Z\\\"
,\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 16:58:43.431198 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 16:58:43.432227 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-99450334/tls.crt::/tmp/serving-cert-99450334/tls.key\\\\\\\"\\\\nI0126 16:58:50.828320 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 16:58:50.830449 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 16:58:50.830472 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 16:58:50.830509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 16:58:50.830515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 16:58:50.834885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 16:58:50.834958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 16:58:50.834991 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 16:58:50.834905 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 16:58:50.835015 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 16:58:50.835084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 16:58:50.835106 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 16:58:50.835116 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 16:58:50.837641 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2daa3f13d37b7ae19dfca406b6b5cfdcd4f211287f7d410193f2fee36a24553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffc4b80383322e2c628a33ac37f15fd77c7650ca108c022f97bb48aad023462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ffc4b80383322e2c628a33ac37f15fd
77c7650ca108c022f97bb48aad023462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:32Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.075013 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191ad5d41024c88c1a4fbc30f307dd6340fd55b93b00d22f62209d0e82be286f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf2a4d8be409a46ffe5702797e56a80023a72a19fb7dc5c49e5f4984cedc600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:32Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.089616 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c823748751e9938f10fab08e33b5fffff5a6d15961ce028204131b7b69a56c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T16:59:32Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.104920 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:32Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.118714 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t4fq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://627cd6b39fdedd967fffcfc0755439277b2d73016fba1ddc7f5b9d5deba43b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p5swq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t4fq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:32Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.128924 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.128960 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.128969 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.128987 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.128998 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:32Z","lastTransitionTime":"2026-01-26T16:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.136177 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq622" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a742e7b-c420-46e3-9e96-e9c744af6124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7222c9b91c0065a545bc1904d9864a5923cc13bfb6617daeb4a965a830f191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8plh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq622\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:32Z 
is after 2025-08-24T17:21:41Z" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.231488 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.231517 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.231542 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.231554 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.231564 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:32Z","lastTransitionTime":"2026-01-26T16:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.333732 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.333780 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.333791 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.333808 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.333819 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:32Z","lastTransitionTime":"2026-01-26T16:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.395062 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.395066 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.395117 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.395192 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:59:32 crc kubenswrapper[4856]: E0126 16:59:32.395339 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-295wr" podUID="12e50462-28e6-4531-ada4-e652310e6cce" Jan 26 16:59:32 crc kubenswrapper[4856]: E0126 16:59:32.395452 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:59:32 crc kubenswrapper[4856]: E0126 16:59:32.395590 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:59:32 crc kubenswrapper[4856]: E0126 16:59:32.395694 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.436422 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.436469 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.436484 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.436502 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.436517 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:32Z","lastTransitionTime":"2026-01-26T16:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.528383 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 14:12:46.832808478 +0000 UTC Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.538786 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.538814 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.538821 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.538834 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.538843 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:32Z","lastTransitionTime":"2026-01-26T16:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.645656 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.645716 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.645730 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.645755 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.645772 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:32Z","lastTransitionTime":"2026-01-26T16:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.748470 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.748503 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.748520 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.748559 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.748569 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:32Z","lastTransitionTime":"2026-01-26T16:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.851488 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.851551 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.851564 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.851581 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.851593 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:32Z","lastTransitionTime":"2026-01-26T16:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.954935 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.954969 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.954988 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.955005 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:32 crc kubenswrapper[4856]: I0126 16:59:32.955015 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:32Z","lastTransitionTime":"2026-01-26T16:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:33 crc kubenswrapper[4856]: I0126 16:59:33.057299 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:33 crc kubenswrapper[4856]: I0126 16:59:33.057328 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:33 crc kubenswrapper[4856]: I0126 16:59:33.057336 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:33 crc kubenswrapper[4856]: I0126 16:59:33.057361 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:33 crc kubenswrapper[4856]: I0126 16:59:33.057370 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:33Z","lastTransitionTime":"2026-01-26T16:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:33 crc kubenswrapper[4856]: I0126 16:59:33.159660 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:33 crc kubenswrapper[4856]: I0126 16:59:33.159702 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:33 crc kubenswrapper[4856]: I0126 16:59:33.159719 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:33 crc kubenswrapper[4856]: I0126 16:59:33.159736 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:33 crc kubenswrapper[4856]: I0126 16:59:33.159745 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:33Z","lastTransitionTime":"2026-01-26T16:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:33 crc kubenswrapper[4856]: I0126 16:59:33.262844 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:33 crc kubenswrapper[4856]: I0126 16:59:33.262872 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:33 crc kubenswrapper[4856]: I0126 16:59:33.262879 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:33 crc kubenswrapper[4856]: I0126 16:59:33.262895 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:33 crc kubenswrapper[4856]: I0126 16:59:33.262906 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:33Z","lastTransitionTime":"2026-01-26T16:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:33 crc kubenswrapper[4856]: I0126 16:59:33.365511 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:33 crc kubenswrapper[4856]: I0126 16:59:33.365595 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:33 crc kubenswrapper[4856]: I0126 16:59:33.365607 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:33 crc kubenswrapper[4856]: I0126 16:59:33.365624 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:33 crc kubenswrapper[4856]: I0126 16:59:33.365636 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:33Z","lastTransitionTime":"2026-01-26T16:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:33 crc kubenswrapper[4856]: I0126 16:59:33.468494 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:33 crc kubenswrapper[4856]: I0126 16:59:33.468557 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:33 crc kubenswrapper[4856]: I0126 16:59:33.468569 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:33 crc kubenswrapper[4856]: I0126 16:59:33.468587 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:33 crc kubenswrapper[4856]: I0126 16:59:33.468597 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:33Z","lastTransitionTime":"2026-01-26T16:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:33 crc kubenswrapper[4856]: I0126 16:59:33.529167 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 17:05:39.252262224 +0000 UTC Jan 26 16:59:33 crc kubenswrapper[4856]: I0126 16:59:33.571610 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:33 crc kubenswrapper[4856]: I0126 16:59:33.571651 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:33 crc kubenswrapper[4856]: I0126 16:59:33.571662 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:33 crc kubenswrapper[4856]: I0126 16:59:33.571679 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:33 crc kubenswrapper[4856]: I0126 16:59:33.571695 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:33Z","lastTransitionTime":"2026-01-26T16:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:33 crc kubenswrapper[4856]: I0126 16:59:33.674812 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:33 crc kubenswrapper[4856]: I0126 16:59:33.674860 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:33 crc kubenswrapper[4856]: I0126 16:59:33.674871 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:33 crc kubenswrapper[4856]: I0126 16:59:33.674885 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:33 crc kubenswrapper[4856]: I0126 16:59:33.674894 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:33Z","lastTransitionTime":"2026-01-26T16:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:33 crc kubenswrapper[4856]: I0126 16:59:33.777012 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:33 crc kubenswrapper[4856]: I0126 16:59:33.777052 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:33 crc kubenswrapper[4856]: I0126 16:59:33.777069 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:33 crc kubenswrapper[4856]: I0126 16:59:33.777085 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:33 crc kubenswrapper[4856]: I0126 16:59:33.777095 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:33Z","lastTransitionTime":"2026-01-26T16:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:33 crc kubenswrapper[4856]: I0126 16:59:33.879101 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:33 crc kubenswrapper[4856]: I0126 16:59:33.879166 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:33 crc kubenswrapper[4856]: I0126 16:59:33.879179 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:33 crc kubenswrapper[4856]: I0126 16:59:33.879195 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:33 crc kubenswrapper[4856]: I0126 16:59:33.879209 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:33Z","lastTransitionTime":"2026-01-26T16:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:33 crc kubenswrapper[4856]: I0126 16:59:33.982003 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:33 crc kubenswrapper[4856]: I0126 16:59:33.982075 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:33 crc kubenswrapper[4856]: I0126 16:59:33.982088 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:33 crc kubenswrapper[4856]: I0126 16:59:33.982104 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:33 crc kubenswrapper[4856]: I0126 16:59:33.982116 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:33Z","lastTransitionTime":"2026-01-26T16:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:34 crc kubenswrapper[4856]: I0126 16:59:34.086031 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:34 crc kubenswrapper[4856]: I0126 16:59:34.086127 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:34 crc kubenswrapper[4856]: I0126 16:59:34.086155 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:34 crc kubenswrapper[4856]: I0126 16:59:34.086221 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:34 crc kubenswrapper[4856]: I0126 16:59:34.086247 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:34Z","lastTransitionTime":"2026-01-26T16:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:34 crc kubenswrapper[4856]: I0126 16:59:34.189226 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:34 crc kubenswrapper[4856]: I0126 16:59:34.189276 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:34 crc kubenswrapper[4856]: I0126 16:59:34.189289 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:34 crc kubenswrapper[4856]: I0126 16:59:34.189308 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:34 crc kubenswrapper[4856]: I0126 16:59:34.189319 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:34Z","lastTransitionTime":"2026-01-26T16:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:34 crc kubenswrapper[4856]: I0126 16:59:34.291901 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:34 crc kubenswrapper[4856]: I0126 16:59:34.291933 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:34 crc kubenswrapper[4856]: I0126 16:59:34.291942 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:34 crc kubenswrapper[4856]: I0126 16:59:34.291955 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:34 crc kubenswrapper[4856]: I0126 16:59:34.291964 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:34Z","lastTransitionTime":"2026-01-26T16:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:34 crc kubenswrapper[4856]: I0126 16:59:34.394169 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:59:34 crc kubenswrapper[4856]: I0126 16:59:34.394213 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:59:34 crc kubenswrapper[4856]: I0126 16:59:34.394240 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 16:59:34 crc kubenswrapper[4856]: I0126 16:59:34.394174 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:59:34 crc kubenswrapper[4856]: E0126 16:59:34.394285 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:59:34 crc kubenswrapper[4856]: E0126 16:59:34.394365 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-295wr" podUID="12e50462-28e6-4531-ada4-e652310e6cce" Jan 26 16:59:34 crc kubenswrapper[4856]: I0126 16:59:34.394439 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:34 crc kubenswrapper[4856]: I0126 16:59:34.394459 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:34 crc kubenswrapper[4856]: I0126 16:59:34.394468 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:34 crc kubenswrapper[4856]: E0126 16:59:34.394471 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:59:34 crc kubenswrapper[4856]: I0126 16:59:34.394485 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:34 crc kubenswrapper[4856]: I0126 16:59:34.394504 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:34Z","lastTransitionTime":"2026-01-26T16:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:34 crc kubenswrapper[4856]: E0126 16:59:34.394560 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:59:34 crc kubenswrapper[4856]: I0126 16:59:34.497661 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:34 crc kubenswrapper[4856]: I0126 16:59:34.497695 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:34 crc kubenswrapper[4856]: I0126 16:59:34.497707 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:34 crc kubenswrapper[4856]: I0126 16:59:34.497723 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:34 crc kubenswrapper[4856]: I0126 16:59:34.497736 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:34Z","lastTransitionTime":"2026-01-26T16:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:34 crc kubenswrapper[4856]: I0126 16:59:34.530243 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 06:22:24.079556356 +0000 UTC Jan 26 16:59:34 crc kubenswrapper[4856]: I0126 16:59:34.600100 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:34 crc kubenswrapper[4856]: I0126 16:59:34.600189 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:34 crc kubenswrapper[4856]: I0126 16:59:34.600201 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:34 crc kubenswrapper[4856]: I0126 16:59:34.600239 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:34 crc kubenswrapper[4856]: I0126 16:59:34.600251 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:34Z","lastTransitionTime":"2026-01-26T16:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:34 crc kubenswrapper[4856]: I0126 16:59:34.702661 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:34 crc kubenswrapper[4856]: I0126 16:59:34.702714 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:34 crc kubenswrapper[4856]: I0126 16:59:34.702729 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:34 crc kubenswrapper[4856]: I0126 16:59:34.702749 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:34 crc kubenswrapper[4856]: I0126 16:59:34.702765 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:34Z","lastTransitionTime":"2026-01-26T16:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:34 crc kubenswrapper[4856]: I0126 16:59:34.804578 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:34 crc kubenswrapper[4856]: I0126 16:59:34.804621 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:34 crc kubenswrapper[4856]: I0126 16:59:34.804641 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:34 crc kubenswrapper[4856]: I0126 16:59:34.804659 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:34 crc kubenswrapper[4856]: I0126 16:59:34.804672 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:34Z","lastTransitionTime":"2026-01-26T16:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:34 crc kubenswrapper[4856]: I0126 16:59:34.907641 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:34 crc kubenswrapper[4856]: I0126 16:59:34.907676 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:34 crc kubenswrapper[4856]: I0126 16:59:34.907688 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:34 crc kubenswrapper[4856]: I0126 16:59:34.907706 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:34 crc kubenswrapper[4856]: I0126 16:59:34.907715 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:34Z","lastTransitionTime":"2026-01-26T16:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.010200 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.010272 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.010286 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.010326 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.010340 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:35Z","lastTransitionTime":"2026-01-26T16:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.113503 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.113578 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.113587 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.113604 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.113614 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:35Z","lastTransitionTime":"2026-01-26T16:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.216463 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.216589 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.216606 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.216665 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.216679 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:35Z","lastTransitionTime":"2026-01-26T16:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.319852 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.319904 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.319914 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.319929 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.319940 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:35Z","lastTransitionTime":"2026-01-26T16:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.413194 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:35Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.422379 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.422456 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.422469 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.422491 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.422504 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:35Z","lastTransitionTime":"2026-01-26T16:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.435977 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad7b59f9-beb7-49d6-a2d1-e29133e46854\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc53a6a6d4222ee5e1ac29fd5957d8cdf3fd42de72c68c85329374ce7afc4004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0c0b092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0c0b092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://249b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://249b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v2l7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:35Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.461125 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71cb66e6f52823e7aa63f88ba1d153fde73816120aa75d4a6b910937303d2b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71cb66e6f52823e7aa63f88ba1d153fde73816120aa75d4a6b910937303d2b9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:59:29Z\\\",\\\"message\\\":\\\"n-kubernetes/ovnkube-node-pxh94\\\\nI0126 16:59:29.834990 6550 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-t4fq2\\\\nI0126 16:59:29.834996 6550 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-t4fq2\\\\nI0126 16:59:29.835005 6550 ovn.go:134] Ensuring zone local for Pod 
openshift-dns/node-resolver-t4fq2 in node crc\\\\nI0126 16:59:29.835026 6550 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 16:59:29.835056 6550 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0126 16:59:29.835079 6550 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0126 16:59:29.835090 6550 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0126 16:59:29.835097 6550 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pxh94_openshift-ovn-kubernetes(ab5b6f50-172b-4535-a0f9-5d103bcab4e7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf
1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:35Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.479884 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-295wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e50462-28e6-4531-ada4-e652310e6cce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf98h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf98h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:59:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-295wr\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:35Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.496953 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc864b0d-83bc-4954-9c61-ad650157caff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbde934a6c8acad10ca3ab8206d0ddbd4f7b17e9d304b898a68f4d3b0303bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:28Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b48f763d4aff37169399be766d5ab4f7ebbf91f304d139c9022a8556946eb107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0094e662f53c4832a984e05a880021af05ffc4c27f25394c28a070d9ef5490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb95f623
fa4f5f42217649ccce4225f9fe588bb3558da9334eef2b40ddb62486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb95f623fa4f5f42217649ccce4225f9fe588bb3558da9334eef2b40ddb62486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:35Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.510411 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63c75ede-5170-4db0-811b-5217ef8d72b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da26ddcfc6ccde3c9aabd63bdef3435f9a9eaab8c095bfeef5670c1295576cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ca3fa13d9e8d442efa93b44a870369f7df3fe7
562d77b98528f5c19a751f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xm9cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:35Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.520280 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tp5hk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f28414c-12c1-4adb-be7b-6182310828eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af4930ff9cebd62106545fb3da2dfc93f7b591426fe4c85aa2e637b60c935f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzc59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tp5hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:35Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.524823 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.524855 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.524865 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.524912 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.524923 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:35Z","lastTransitionTime":"2026-01-26T16:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.531112 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 16:59:59.499715133 +0000 UTC Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.721874 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.721939 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.721952 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.721968 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.722024 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:35Z","lastTransitionTime":"2026-01-26T16:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.724219 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c09748c0963bdb50c9552f249d1135ea53e68c52babd788dad050951cb849cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:35Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.744298 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:35Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.759807 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-v7579" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a77e85f9-b566-4807-bb92-55963c97b93c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba87c9fc35c230bbee201a5176cb467309f0b9aee82dfc81f3b677a15486d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n9h7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03dc794e9c2035f2e1983eacad3e51d76223
cb1b82e2f402c73f9453e4bd2f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n9h7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-v7579\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:35Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.774165 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t4fq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://627cd6b39fdedd967fffcfc0755439277b2d73016fba1ddc7f5b9d5deba43b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p5swq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t4fq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:35Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.792520 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq622" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a742e7b-c420-46e3-9e96-e9c744af6124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7222c9b91c0065a545bc1904d9864a5923cc13bfb6617daeb4a965a830f191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8plh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq622\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:35Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.812460 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59ecd87a-c5db-446d-ad3e-cfabbd648c1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89975b8f9428f81ab5d3fb48ced5dd9c837bea2feea3b89f5f7ff8d7d5d15b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40d432159a07ba20bf95f058bf8a597d67cbd8d852519bed035694f8ba3d8ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ba0f131eee14df389391e46bc772894e49e976c3663df5fb3bc98b3cb4a3d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\
\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03a01b651d9f66da4dd1f6e0d29ad97c0d6ae46b644c3d997d8dc99476706df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f07438e20bdf71c752bb661084c835341999c561bb5442d75e177223881276f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T16:58:51Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 16:58:43.431198 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 16:58:43.432227 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-99450334/tls.crt::/tmp/serving-cert-99450334/tls.key\\\\\\\"\\\\nI0126 16:58:50.828320 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 16:58:50.830449 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 16:58:50.830472 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 16:58:50.830509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 16:58:50.830515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 16:58:50.834885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 16:58:50.834958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 
16:58:50.834991 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 16:58:50.834905 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 16:58:50.835015 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 16:58:50.835084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 16:58:50.835106 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 16:58:50.835116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 16:58:50.837641 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2daa3f13d37b7ae19dfca406b6b5cfdcd4f211287f7d410193f2fee36a24553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffc4b80383322e2c628a33ac37f15fd77c7650ca108c022f97bb48aad023462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ffc4b80383322e2c628a33ac37f15fd77c7650ca108c022f97bb48aad023462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:35Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.824023 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.824045 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.824054 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.824068 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.824079 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:35Z","lastTransitionTime":"2026-01-26T16:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.829836 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191ad5d41024c88c1a4fbc30f307dd6340fd55b93b00d22f62209d0e82be286f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf2a4d8be409a46ffe5702797e56a80023a72a19fb7dc5c49e5f4984cedc600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:35Z is after 2025-08-24T17:21:41Z" Jan 26 
16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.844280 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c823748751e9938f10fab08e33b5fffff5a6d15961ce028204131b7b69a56c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:35Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.858460 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:35Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.926627 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.926670 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.926679 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.926696 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:35 crc kubenswrapper[4856]: I0126 16:59:35.926706 4856 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:35Z","lastTransitionTime":"2026-01-26T16:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.029983 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.030031 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.030041 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.030059 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.030070 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:36Z","lastTransitionTime":"2026-01-26T16:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.133142 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.133181 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.133189 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.133204 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.133213 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:36Z","lastTransitionTime":"2026-01-26T16:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.235491 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.235568 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.235584 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.235606 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.235620 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:36Z","lastTransitionTime":"2026-01-26T16:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.338059 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.338106 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.338113 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.338127 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.338136 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:36Z","lastTransitionTime":"2026-01-26T16:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.394557 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.394597 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.394564 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:59:36 crc kubenswrapper[4856]: E0126 16:59:36.394715 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.394564 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 16:59:36 crc kubenswrapper[4856]: E0126 16:59:36.394830 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:59:36 crc kubenswrapper[4856]: E0126 16:59:36.394927 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-295wr" podUID="12e50462-28e6-4531-ada4-e652310e6cce" Jan 26 16:59:36 crc kubenswrapper[4856]: E0126 16:59:36.394985 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.439867 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.439911 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.439922 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.439938 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.439949 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:36Z","lastTransitionTime":"2026-01-26T16:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.531768 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 09:43:06.140636051 +0000 UTC Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.542545 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.542572 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.542582 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.542595 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.542605 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:36Z","lastTransitionTime":"2026-01-26T16:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.645348 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.645391 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.645400 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.645415 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.645426 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:36Z","lastTransitionTime":"2026-01-26T16:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.747563 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.747605 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.747617 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.747635 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.747650 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:36Z","lastTransitionTime":"2026-01-26T16:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.849905 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.849998 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.850011 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.850034 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.850048 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:36Z","lastTransitionTime":"2026-01-26T16:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.952567 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.952919 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.952929 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.952944 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:36 crc kubenswrapper[4856]: I0126 16:59:36.952955 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:36Z","lastTransitionTime":"2026-01-26T16:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:37 crc kubenswrapper[4856]: I0126 16:59:37.055884 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:37 crc kubenswrapper[4856]: I0126 16:59:37.055924 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:37 crc kubenswrapper[4856]: I0126 16:59:37.055933 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:37 crc kubenswrapper[4856]: I0126 16:59:37.055948 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:37 crc kubenswrapper[4856]: I0126 16:59:37.055959 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:37Z","lastTransitionTime":"2026-01-26T16:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:37 crc kubenswrapper[4856]: I0126 16:59:37.158820 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:37 crc kubenswrapper[4856]: I0126 16:59:37.158861 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:37 crc kubenswrapper[4856]: I0126 16:59:37.158872 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:37 crc kubenswrapper[4856]: I0126 16:59:37.158891 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:37 crc kubenswrapper[4856]: I0126 16:59:37.158904 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:37Z","lastTransitionTime":"2026-01-26T16:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:37 crc kubenswrapper[4856]: I0126 16:59:37.262186 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:37 crc kubenswrapper[4856]: I0126 16:59:37.262278 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:37 crc kubenswrapper[4856]: I0126 16:59:37.262293 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:37 crc kubenswrapper[4856]: I0126 16:59:37.262319 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:37 crc kubenswrapper[4856]: I0126 16:59:37.262333 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:37Z","lastTransitionTime":"2026-01-26T16:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:37 crc kubenswrapper[4856]: I0126 16:59:37.365031 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:37 crc kubenswrapper[4856]: I0126 16:59:37.365078 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:37 crc kubenswrapper[4856]: I0126 16:59:37.365090 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:37 crc kubenswrapper[4856]: I0126 16:59:37.365109 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:37 crc kubenswrapper[4856]: I0126 16:59:37.365123 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:37Z","lastTransitionTime":"2026-01-26T16:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:37 crc kubenswrapper[4856]: I0126 16:59:37.475599 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:37 crc kubenswrapper[4856]: I0126 16:59:37.475643 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:37 crc kubenswrapper[4856]: I0126 16:59:37.475658 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:37 crc kubenswrapper[4856]: I0126 16:59:37.475739 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:37 crc kubenswrapper[4856]: I0126 16:59:37.475795 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:37Z","lastTransitionTime":"2026-01-26T16:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:37 crc kubenswrapper[4856]: I0126 16:59:37.531928 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 00:58:27.160662964 +0000 UTC Jan 26 16:59:37 crc kubenswrapper[4856]: I0126 16:59:37.578987 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:37 crc kubenswrapper[4856]: I0126 16:59:37.579013 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:37 crc kubenswrapper[4856]: I0126 16:59:37.579022 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:37 crc kubenswrapper[4856]: I0126 16:59:37.579035 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:37 crc kubenswrapper[4856]: I0126 16:59:37.579044 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:37Z","lastTransitionTime":"2026-01-26T16:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:37 crc kubenswrapper[4856]: I0126 16:59:37.682050 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:37 crc kubenswrapper[4856]: I0126 16:59:37.682102 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:37 crc kubenswrapper[4856]: I0126 16:59:37.682113 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:37 crc kubenswrapper[4856]: I0126 16:59:37.682130 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:37 crc kubenswrapper[4856]: I0126 16:59:37.682141 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:37Z","lastTransitionTime":"2026-01-26T16:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:37 crc kubenswrapper[4856]: I0126 16:59:37.784431 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:37 crc kubenswrapper[4856]: I0126 16:59:37.784468 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:37 crc kubenswrapper[4856]: I0126 16:59:37.784479 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:37 crc kubenswrapper[4856]: I0126 16:59:37.784496 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:37 crc kubenswrapper[4856]: I0126 16:59:37.784509 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:37Z","lastTransitionTime":"2026-01-26T16:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:37 crc kubenswrapper[4856]: I0126 16:59:37.886719 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:37 crc kubenswrapper[4856]: I0126 16:59:37.886790 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:37 crc kubenswrapper[4856]: I0126 16:59:37.886803 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:37 crc kubenswrapper[4856]: I0126 16:59:37.886817 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:37 crc kubenswrapper[4856]: I0126 16:59:37.886827 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:37Z","lastTransitionTime":"2026-01-26T16:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:37 crc kubenswrapper[4856]: I0126 16:59:37.989709 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:37 crc kubenswrapper[4856]: I0126 16:59:37.989768 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:37 crc kubenswrapper[4856]: I0126 16:59:37.989789 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:37 crc kubenswrapper[4856]: I0126 16:59:37.989814 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:37 crc kubenswrapper[4856]: I0126 16:59:37.989830 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:37Z","lastTransitionTime":"2026-01-26T16:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:38 crc kubenswrapper[4856]: I0126 16:59:38.093043 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:38 crc kubenswrapper[4856]: I0126 16:59:38.093090 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:38 crc kubenswrapper[4856]: I0126 16:59:38.093103 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:38 crc kubenswrapper[4856]: I0126 16:59:38.093121 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:38 crc kubenswrapper[4856]: I0126 16:59:38.093132 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:38Z","lastTransitionTime":"2026-01-26T16:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:38 crc kubenswrapper[4856]: I0126 16:59:38.196406 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:38 crc kubenswrapper[4856]: I0126 16:59:38.196472 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:38 crc kubenswrapper[4856]: I0126 16:59:38.196494 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:38 crc kubenswrapper[4856]: I0126 16:59:38.196519 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:38 crc kubenswrapper[4856]: I0126 16:59:38.196589 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:38Z","lastTransitionTime":"2026-01-26T16:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:38 crc kubenswrapper[4856]: I0126 16:59:38.299859 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:38 crc kubenswrapper[4856]: I0126 16:59:38.299920 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:38 crc kubenswrapper[4856]: I0126 16:59:38.299932 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:38 crc kubenswrapper[4856]: I0126 16:59:38.299954 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:38 crc kubenswrapper[4856]: I0126 16:59:38.299968 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:38Z","lastTransitionTime":"2026-01-26T16:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:38 crc kubenswrapper[4856]: I0126 16:59:38.395223 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:59:38 crc kubenswrapper[4856]: I0126 16:59:38.395367 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 16:59:38 crc kubenswrapper[4856]: I0126 16:59:38.395409 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:59:38 crc kubenswrapper[4856]: E0126 16:59:38.395443 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:59:38 crc kubenswrapper[4856]: I0126 16:59:38.395363 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:59:38 crc kubenswrapper[4856]: E0126 16:59:38.395541 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-295wr" podUID="12e50462-28e6-4531-ada4-e652310e6cce" Jan 26 16:59:38 crc kubenswrapper[4856]: E0126 16:59:38.395636 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:59:38 crc kubenswrapper[4856]: E0126 16:59:38.395777 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:59:38 crc kubenswrapper[4856]: I0126 16:59:38.402673 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:38 crc kubenswrapper[4856]: I0126 16:59:38.402708 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:38 crc kubenswrapper[4856]: I0126 16:59:38.402720 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:38 crc kubenswrapper[4856]: I0126 16:59:38.402735 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:38 crc kubenswrapper[4856]: I0126 16:59:38.402745 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:38Z","lastTransitionTime":"2026-01-26T16:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:38 crc kubenswrapper[4856]: I0126 16:59:38.506355 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:38 crc kubenswrapper[4856]: I0126 16:59:38.506433 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:38 crc kubenswrapper[4856]: I0126 16:59:38.506443 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:38 crc kubenswrapper[4856]: I0126 16:59:38.506478 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:38 crc kubenswrapper[4856]: I0126 16:59:38.506490 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:38Z","lastTransitionTime":"2026-01-26T16:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:38 crc kubenswrapper[4856]: I0126 16:59:38.532744 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 20:10:57.690258276 +0000 UTC Jan 26 16:59:38 crc kubenswrapper[4856]: I0126 16:59:38.608948 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:38 crc kubenswrapper[4856]: I0126 16:59:38.609000 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:38 crc kubenswrapper[4856]: I0126 16:59:38.609011 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:38 crc kubenswrapper[4856]: I0126 16:59:38.609032 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:38 crc kubenswrapper[4856]: I0126 16:59:38.609045 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:38Z","lastTransitionTime":"2026-01-26T16:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:38 crc kubenswrapper[4856]: I0126 16:59:38.712461 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:38 crc kubenswrapper[4856]: I0126 16:59:38.712540 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:38 crc kubenswrapper[4856]: I0126 16:59:38.712551 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:38 crc kubenswrapper[4856]: I0126 16:59:38.712587 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:38 crc kubenswrapper[4856]: I0126 16:59:38.712603 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:38Z","lastTransitionTime":"2026-01-26T16:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:38 crc kubenswrapper[4856]: I0126 16:59:38.815114 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:38 crc kubenswrapper[4856]: I0126 16:59:38.815182 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:38 crc kubenswrapper[4856]: I0126 16:59:38.815194 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:38 crc kubenswrapper[4856]: I0126 16:59:38.815214 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:38 crc kubenswrapper[4856]: I0126 16:59:38.815227 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:38Z","lastTransitionTime":"2026-01-26T16:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:38 crc kubenswrapper[4856]: I0126 16:59:38.923370 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:38 crc kubenswrapper[4856]: I0126 16:59:38.923428 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:38 crc kubenswrapper[4856]: I0126 16:59:38.923445 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:38 crc kubenswrapper[4856]: I0126 16:59:38.923465 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:38 crc kubenswrapper[4856]: I0126 16:59:38.923478 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:38Z","lastTransitionTime":"2026-01-26T16:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.027189 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.027270 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.027287 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.027310 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.027325 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:39Z","lastTransitionTime":"2026-01-26T16:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.098024 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.098090 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.098107 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.098131 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.098148 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:39Z","lastTransitionTime":"2026-01-26T16:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:39 crc kubenswrapper[4856]: E0126 16:59:39.118421 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17523591-a778-4a97-aeab-8a7a93101850\\\",\\\"systemUUID\\\":\\\"ca45d056-99cb-4442-8a44-7e899628ecb2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:39Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.122854 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.122898 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.122909 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.122926 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.122937 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:39Z","lastTransitionTime":"2026-01-26T16:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:39 crc kubenswrapper[4856]: E0126 16:59:39.136640 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17523591-a778-4a97-aeab-8a7a93101850\\\",\\\"systemUUID\\\":\\\"ca45d056-99cb-4442-8a44-7e899628ecb2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:39Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.140911 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.140944 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.140952 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.140964 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.140973 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:39Z","lastTransitionTime":"2026-01-26T16:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:39 crc kubenswrapper[4856]: E0126 16:59:39.201001 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17523591-a778-4a97-aeab-8a7a93101850\\\",\\\"systemUUID\\\":\\\"ca45d056-99cb-4442-8a44-7e899628ecb2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:39Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:39 crc kubenswrapper[4856]: E0126 16:59:39.201145 4856 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.202565 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.202587 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.202594 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.202611 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.202622 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:39Z","lastTransitionTime":"2026-01-26T16:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.304870 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.304906 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.304914 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.304927 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.304937 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:39Z","lastTransitionTime":"2026-01-26T16:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.407774 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.407822 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.407839 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.407864 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.407881 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:39Z","lastTransitionTime":"2026-01-26T16:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.511054 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.511140 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.511158 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.511183 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.511202 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:39Z","lastTransitionTime":"2026-01-26T16:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.532869 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 11:40:46.268132806 +0000 UTC Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.614189 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.614246 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.614260 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.614281 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.614298 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:39Z","lastTransitionTime":"2026-01-26T16:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.717565 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.717614 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.717625 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.717646 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.717659 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:39Z","lastTransitionTime":"2026-01-26T16:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.820188 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.820417 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.820439 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.820466 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.820487 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:39Z","lastTransitionTime":"2026-01-26T16:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.925915 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.925968 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.925984 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.926007 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:39 crc kubenswrapper[4856]: I0126 16:59:39.926023 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:39Z","lastTransitionTime":"2026-01-26T16:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.028884 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.028947 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.028970 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.028999 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.029020 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:40Z","lastTransitionTime":"2026-01-26T16:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.132515 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.132576 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.132588 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.132605 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.132618 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:40Z","lastTransitionTime":"2026-01-26T16:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.236119 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.236189 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.236222 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.236251 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.236273 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:40Z","lastTransitionTime":"2026-01-26T16:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.339220 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.339287 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.339304 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.339331 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.339354 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:40Z","lastTransitionTime":"2026-01-26T16:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.395041 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.395143 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:59:40 crc kubenswrapper[4856]: E0126 16:59:40.395178 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.395429 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 16:59:40 crc kubenswrapper[4856]: E0126 16:59:40.395445 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.395564 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:59:40 crc kubenswrapper[4856]: E0126 16:59:40.395691 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-295wr" podUID="12e50462-28e6-4531-ada4-e652310e6cce" Jan 26 16:59:40 crc kubenswrapper[4856]: E0126 16:59:40.395950 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.441719 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.441755 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.441767 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.441785 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.441799 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:40Z","lastTransitionTime":"2026-01-26T16:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.533328 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 03:26:01.176271514 +0000 UTC Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.544155 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.544190 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.544202 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.544218 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.544229 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:40Z","lastTransitionTime":"2026-01-26T16:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.647308 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.647356 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.647371 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.647393 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.647410 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:40Z","lastTransitionTime":"2026-01-26T16:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.749092 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.749130 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.749161 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.749178 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.749189 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:40Z","lastTransitionTime":"2026-01-26T16:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.852644 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.852710 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.852746 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.852766 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.852778 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:40Z","lastTransitionTime":"2026-01-26T16:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.956058 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.956149 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.956185 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.956216 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:40 crc kubenswrapper[4856]: I0126 16:59:40.956237 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:40Z","lastTransitionTime":"2026-01-26T16:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:41 crc kubenswrapper[4856]: I0126 16:59:41.058950 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:41 crc kubenswrapper[4856]: I0126 16:59:41.058986 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:41 crc kubenswrapper[4856]: I0126 16:59:41.058997 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:41 crc kubenswrapper[4856]: I0126 16:59:41.059012 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:41 crc kubenswrapper[4856]: I0126 16:59:41.059023 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:41Z","lastTransitionTime":"2026-01-26T16:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:41 crc kubenswrapper[4856]: I0126 16:59:41.160596 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:41 crc kubenswrapper[4856]: I0126 16:59:41.160635 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:41 crc kubenswrapper[4856]: I0126 16:59:41.160644 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:41 crc kubenswrapper[4856]: I0126 16:59:41.160657 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:41 crc kubenswrapper[4856]: I0126 16:59:41.160666 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:41Z","lastTransitionTime":"2026-01-26T16:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:41 crc kubenswrapper[4856]: I0126 16:59:41.263667 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:41 crc kubenswrapper[4856]: I0126 16:59:41.263707 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:41 crc kubenswrapper[4856]: I0126 16:59:41.263753 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:41 crc kubenswrapper[4856]: I0126 16:59:41.263771 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:41 crc kubenswrapper[4856]: I0126 16:59:41.263783 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:41Z","lastTransitionTime":"2026-01-26T16:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:41 crc kubenswrapper[4856]: I0126 16:59:41.365873 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:41 crc kubenswrapper[4856]: I0126 16:59:41.365922 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:41 crc kubenswrapper[4856]: I0126 16:59:41.365936 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:41 crc kubenswrapper[4856]: I0126 16:59:41.365956 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:41 crc kubenswrapper[4856]: I0126 16:59:41.365970 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:41Z","lastTransitionTime":"2026-01-26T16:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:41 crc kubenswrapper[4856]: I0126 16:59:41.468813 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:41 crc kubenswrapper[4856]: I0126 16:59:41.468858 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:41 crc kubenswrapper[4856]: I0126 16:59:41.468875 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:41 crc kubenswrapper[4856]: I0126 16:59:41.468891 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:41 crc kubenswrapper[4856]: I0126 16:59:41.468902 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:41Z","lastTransitionTime":"2026-01-26T16:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:41 crc kubenswrapper[4856]: I0126 16:59:41.533993 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 23:13:54.894630923 +0000 UTC Jan 26 16:59:41 crc kubenswrapper[4856]: I0126 16:59:41.571614 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:41 crc kubenswrapper[4856]: I0126 16:59:41.571666 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:41 crc kubenswrapper[4856]: I0126 16:59:41.571681 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:41 crc kubenswrapper[4856]: I0126 16:59:41.571702 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:41 crc kubenswrapper[4856]: I0126 16:59:41.571718 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:41Z","lastTransitionTime":"2026-01-26T16:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:41 crc kubenswrapper[4856]: I0126 16:59:41.674600 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:41 crc kubenswrapper[4856]: I0126 16:59:41.674660 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:41 crc kubenswrapper[4856]: I0126 16:59:41.674679 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:41 crc kubenswrapper[4856]: I0126 16:59:41.674703 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:41 crc kubenswrapper[4856]: I0126 16:59:41.674718 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:41Z","lastTransitionTime":"2026-01-26T16:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:41 crc kubenswrapper[4856]: I0126 16:59:41.777808 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:41 crc kubenswrapper[4856]: I0126 16:59:41.777858 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:41 crc kubenswrapper[4856]: I0126 16:59:41.777885 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:41 crc kubenswrapper[4856]: I0126 16:59:41.777908 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:41 crc kubenswrapper[4856]: I0126 16:59:41.777922 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:41Z","lastTransitionTime":"2026-01-26T16:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:41 crc kubenswrapper[4856]: I0126 16:59:41.881967 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:41 crc kubenswrapper[4856]: I0126 16:59:41.882018 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:41 crc kubenswrapper[4856]: I0126 16:59:41.882063 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:41 crc kubenswrapper[4856]: I0126 16:59:41.882098 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:41 crc kubenswrapper[4856]: I0126 16:59:41.882122 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:41Z","lastTransitionTime":"2026-01-26T16:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:41 crc kubenswrapper[4856]: I0126 16:59:41.987699 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:41 crc kubenswrapper[4856]: I0126 16:59:41.987756 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:41 crc kubenswrapper[4856]: I0126 16:59:41.987778 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:41 crc kubenswrapper[4856]: I0126 16:59:41.987812 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:41 crc kubenswrapper[4856]: I0126 16:59:41.987837 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:41Z","lastTransitionTime":"2026-01-26T16:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:42 crc kubenswrapper[4856]: I0126 16:59:42.091292 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:42 crc kubenswrapper[4856]: I0126 16:59:42.091361 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:42 crc kubenswrapper[4856]: I0126 16:59:42.091405 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:42 crc kubenswrapper[4856]: I0126 16:59:42.091437 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:42 crc kubenswrapper[4856]: I0126 16:59:42.091461 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:42Z","lastTransitionTime":"2026-01-26T16:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:42 crc kubenswrapper[4856]: I0126 16:59:42.197508 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:42 crc kubenswrapper[4856]: I0126 16:59:42.197600 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:42 crc kubenswrapper[4856]: I0126 16:59:42.197627 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:42 crc kubenswrapper[4856]: I0126 16:59:42.197653 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:42 crc kubenswrapper[4856]: I0126 16:59:42.197670 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:42Z","lastTransitionTime":"2026-01-26T16:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:42 crc kubenswrapper[4856]: I0126 16:59:42.285638 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12e50462-28e6-4531-ada4-e652310e6cce-metrics-certs\") pod \"network-metrics-daemon-295wr\" (UID: \"12e50462-28e6-4531-ada4-e652310e6cce\") " pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 16:59:42 crc kubenswrapper[4856]: E0126 16:59:42.285818 4856 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 16:59:42 crc kubenswrapper[4856]: E0126 16:59:42.285891 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12e50462-28e6-4531-ada4-e652310e6cce-metrics-certs podName:12e50462-28e6-4531-ada4-e652310e6cce nodeName:}" failed. No retries permitted until 2026-01-26 17:00:14.285864872 +0000 UTC m=+110.239118853 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/12e50462-28e6-4531-ada4-e652310e6cce-metrics-certs") pod "network-metrics-daemon-295wr" (UID: "12e50462-28e6-4531-ada4-e652310e6cce") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 16:59:42 crc kubenswrapper[4856]: I0126 16:59:42.300512 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:42 crc kubenswrapper[4856]: I0126 16:59:42.300593 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:42 crc kubenswrapper[4856]: I0126 16:59:42.300612 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:42 crc kubenswrapper[4856]: I0126 16:59:42.300635 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:42 crc kubenswrapper[4856]: I0126 16:59:42.300652 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:42Z","lastTransitionTime":"2026-01-26T16:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:42 crc kubenswrapper[4856]: I0126 16:59:42.394938 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:59:42 crc kubenswrapper[4856]: I0126 16:59:42.394989 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:59:42 crc kubenswrapper[4856]: I0126 16:59:42.394965 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:59:42 crc kubenswrapper[4856]: I0126 16:59:42.394948 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 16:59:42 crc kubenswrapper[4856]: E0126 16:59:42.395107 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:59:42 crc kubenswrapper[4856]: E0126 16:59:42.395270 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:59:42 crc kubenswrapper[4856]: E0126 16:59:42.395306 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-295wr" podUID="12e50462-28e6-4531-ada4-e652310e6cce" Jan 26 16:59:42 crc kubenswrapper[4856]: E0126 16:59:42.395347 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:59:42 crc kubenswrapper[4856]: I0126 16:59:42.403962 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:42 crc kubenswrapper[4856]: I0126 16:59:42.403994 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:42 crc kubenswrapper[4856]: I0126 16:59:42.404003 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:42 crc kubenswrapper[4856]: I0126 16:59:42.404018 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:42 crc kubenswrapper[4856]: I0126 16:59:42.404032 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:42Z","lastTransitionTime":"2026-01-26T16:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:42 crc kubenswrapper[4856]: I0126 16:59:42.507007 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:42 crc kubenswrapper[4856]: I0126 16:59:42.507042 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:42 crc kubenswrapper[4856]: I0126 16:59:42.507052 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:42 crc kubenswrapper[4856]: I0126 16:59:42.507068 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:42 crc kubenswrapper[4856]: I0126 16:59:42.507077 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:42Z","lastTransitionTime":"2026-01-26T16:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:42 crc kubenswrapper[4856]: I0126 16:59:42.534778 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 21:33:37.338164612 +0000 UTC Jan 26 16:59:42 crc kubenswrapper[4856]: I0126 16:59:42.609821 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:42 crc kubenswrapper[4856]: I0126 16:59:42.609887 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:42 crc kubenswrapper[4856]: I0126 16:59:42.609905 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:42 crc kubenswrapper[4856]: I0126 16:59:42.609930 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:42 crc kubenswrapper[4856]: I0126 16:59:42.609949 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:42Z","lastTransitionTime":"2026-01-26T16:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:42 crc kubenswrapper[4856]: I0126 16:59:42.712839 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:42 crc kubenswrapper[4856]: I0126 16:59:42.712907 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:42 crc kubenswrapper[4856]: I0126 16:59:42.712930 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:42 crc kubenswrapper[4856]: I0126 16:59:42.712959 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:42 crc kubenswrapper[4856]: I0126 16:59:42.712980 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:42Z","lastTransitionTime":"2026-01-26T16:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:42 crc kubenswrapper[4856]: I0126 16:59:42.816126 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:42 crc kubenswrapper[4856]: I0126 16:59:42.816202 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:42 crc kubenswrapper[4856]: I0126 16:59:42.816221 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:42 crc kubenswrapper[4856]: I0126 16:59:42.816255 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:42 crc kubenswrapper[4856]: I0126 16:59:42.816290 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:42Z","lastTransitionTime":"2026-01-26T16:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:42 crc kubenswrapper[4856]: I0126 16:59:42.919670 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:42 crc kubenswrapper[4856]: I0126 16:59:42.919744 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:42 crc kubenswrapper[4856]: I0126 16:59:42.919765 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:42 crc kubenswrapper[4856]: I0126 16:59:42.919793 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:42 crc kubenswrapper[4856]: I0126 16:59:42.919819 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:42Z","lastTransitionTime":"2026-01-26T16:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:43 crc kubenswrapper[4856]: I0126 16:59:43.022823 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:43 crc kubenswrapper[4856]: I0126 16:59:43.022861 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:43 crc kubenswrapper[4856]: I0126 16:59:43.022873 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:43 crc kubenswrapper[4856]: I0126 16:59:43.022890 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:43 crc kubenswrapper[4856]: I0126 16:59:43.022903 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:43Z","lastTransitionTime":"2026-01-26T16:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:43 crc kubenswrapper[4856]: I0126 16:59:43.125408 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:43 crc kubenswrapper[4856]: I0126 16:59:43.125467 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:43 crc kubenswrapper[4856]: I0126 16:59:43.125484 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:43 crc kubenswrapper[4856]: I0126 16:59:43.125513 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:43 crc kubenswrapper[4856]: I0126 16:59:43.125556 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:43Z","lastTransitionTime":"2026-01-26T16:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:43 crc kubenswrapper[4856]: I0126 16:59:43.228360 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:43 crc kubenswrapper[4856]: I0126 16:59:43.228402 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:43 crc kubenswrapper[4856]: I0126 16:59:43.228418 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:43 crc kubenswrapper[4856]: I0126 16:59:43.228438 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:43 crc kubenswrapper[4856]: I0126 16:59:43.228453 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:43Z","lastTransitionTime":"2026-01-26T16:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:43 crc kubenswrapper[4856]: I0126 16:59:43.330855 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:43 crc kubenswrapper[4856]: I0126 16:59:43.330910 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:43 crc kubenswrapper[4856]: I0126 16:59:43.330921 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:43 crc kubenswrapper[4856]: I0126 16:59:43.330941 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:43 crc kubenswrapper[4856]: I0126 16:59:43.330959 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:43Z","lastTransitionTime":"2026-01-26T16:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:43 crc kubenswrapper[4856]: I0126 16:59:43.434139 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:43 crc kubenswrapper[4856]: I0126 16:59:43.434219 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:43 crc kubenswrapper[4856]: I0126 16:59:43.434237 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:43 crc kubenswrapper[4856]: I0126 16:59:43.434262 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:43 crc kubenswrapper[4856]: I0126 16:59:43.434280 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:43Z","lastTransitionTime":"2026-01-26T16:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:43 crc kubenswrapper[4856]: I0126 16:59:43.535120 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 01:49:19.238651524 +0000 UTC Jan 26 16:59:43 crc kubenswrapper[4856]: I0126 16:59:43.537251 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:43 crc kubenswrapper[4856]: I0126 16:59:43.537331 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:43 crc kubenswrapper[4856]: I0126 16:59:43.537359 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:43 crc kubenswrapper[4856]: I0126 16:59:43.537392 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:43 crc kubenswrapper[4856]: I0126 16:59:43.537416 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:43Z","lastTransitionTime":"2026-01-26T16:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:43 crc kubenswrapper[4856]: I0126 16:59:43.641298 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:43 crc kubenswrapper[4856]: I0126 16:59:43.641371 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:43 crc kubenswrapper[4856]: I0126 16:59:43.641404 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:43 crc kubenswrapper[4856]: I0126 16:59:43.641435 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:43 crc kubenswrapper[4856]: I0126 16:59:43.641459 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:43Z","lastTransitionTime":"2026-01-26T16:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:43 crc kubenswrapper[4856]: I0126 16:59:43.744682 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:43 crc kubenswrapper[4856]: I0126 16:59:43.744736 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:43 crc kubenswrapper[4856]: I0126 16:59:43.744752 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:43 crc kubenswrapper[4856]: I0126 16:59:43.744782 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:43 crc kubenswrapper[4856]: I0126 16:59:43.744798 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:43Z","lastTransitionTime":"2026-01-26T16:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:43 crc kubenswrapper[4856]: I0126 16:59:43.848623 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:43 crc kubenswrapper[4856]: I0126 16:59:43.848695 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:43 crc kubenswrapper[4856]: I0126 16:59:43.848719 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:43 crc kubenswrapper[4856]: I0126 16:59:43.848749 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:43 crc kubenswrapper[4856]: I0126 16:59:43.848770 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:43Z","lastTransitionTime":"2026-01-26T16:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:43 crc kubenswrapper[4856]: I0126 16:59:43.951777 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:43 crc kubenswrapper[4856]: I0126 16:59:43.951859 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:43 crc kubenswrapper[4856]: I0126 16:59:43.951880 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:43 crc kubenswrapper[4856]: I0126 16:59:43.951908 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:43 crc kubenswrapper[4856]: I0126 16:59:43.951926 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:43Z","lastTransitionTime":"2026-01-26T16:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.055655 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.055703 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.055720 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.055743 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.055760 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:44Z","lastTransitionTime":"2026-01-26T16:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.158650 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.158713 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.158732 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.158758 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.158776 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:44Z","lastTransitionTime":"2026-01-26T16:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.262265 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.262372 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.263852 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.264010 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.264704 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:44Z","lastTransitionTime":"2026-01-26T16:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.367886 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.367914 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.367924 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.367937 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.367945 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:44Z","lastTransitionTime":"2026-01-26T16:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.394788 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:59:44 crc kubenswrapper[4856]: E0126 16:59:44.394950 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.395698 4856 scope.go:117] "RemoveContainer" containerID="71cb66e6f52823e7aa63f88ba1d153fde73816120aa75d4a6b910937303d2b9e" Jan 26 16:59:44 crc kubenswrapper[4856]: E0126 16:59:44.395826 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pxh94_openshift-ovn-kubernetes(ab5b6f50-172b-4535-a0f9-5d103bcab4e7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.395925 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:59:44 crc kubenswrapper[4856]: E0126 16:59:44.395970 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.396063 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:59:44 crc kubenswrapper[4856]: E0126 16:59:44.396103 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.396191 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 16:59:44 crc kubenswrapper[4856]: E0126 16:59:44.396238 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-295wr" podUID="12e50462-28e6-4531-ada4-e652310e6cce" Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.470283 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.470338 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.470350 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.470366 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.470376 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:44Z","lastTransitionTime":"2026-01-26T16:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.536252 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 16:35:26.149214412 +0000 UTC Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.573948 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.574002 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.574017 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.574036 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.574049 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:44Z","lastTransitionTime":"2026-01-26T16:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.677036 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.677108 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.677133 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.677166 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.677189 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:44Z","lastTransitionTime":"2026-01-26T16:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.779592 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.779847 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.779885 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.779942 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.779967 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:44Z","lastTransitionTime":"2026-01-26T16:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.883602 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.883652 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.883669 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.883694 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.883711 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:44Z","lastTransitionTime":"2026-01-26T16:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.986425 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.986502 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.986521 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.986580 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:44 crc kubenswrapper[4856]: I0126 16:59:44.986598 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:44Z","lastTransitionTime":"2026-01-26T16:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.089715 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.089830 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.089850 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.089875 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.089893 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:45Z","lastTransitionTime":"2026-01-26T16:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.193600 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.193644 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.193656 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.193676 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.193690 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:45Z","lastTransitionTime":"2026-01-26T16:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.297573 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.297623 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.297634 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.297657 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.297669 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:45Z","lastTransitionTime":"2026-01-26T16:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.401170 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.401217 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.401229 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.401246 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.401257 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:45Z","lastTransitionTime":"2026-01-26T16:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.409199 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:45Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.410374 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.425953 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad7b59f9-beb7-49d6-a2d1-e29133e46854\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc53a6a6d4222ee5e1ac29fd5957d8cdf3fd42de72c68c85329374ce7afc4004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:59Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0c0b
092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0c0b092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:06Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://249b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://249b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v2l7v\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:45Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.446909 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71cb66e6f52823e7aa63f88ba1d153fde73816120aa75d4a6b910937303d2b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71cb66e6f52823e7aa63f88ba1d153fde73816120aa75d4a6b910937303d2b9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:59:29Z\\\",\\\"message\\\":\\\"n-kubernetes/ovnkube-node-pxh94\\\\nI0126 16:59:29.834990 6550 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-t4fq2\\\\nI0126 16:59:29.834996 6550 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-t4fq2\\\\nI0126 16:59:29.835005 6550 ovn.go:134] Ensuring zone local for Pod 
openshift-dns/node-resolver-t4fq2 in node crc\\\\nI0126 16:59:29.835026 6550 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 16:59:29.835056 6550 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0126 16:59:29.835079 6550 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0126 16:59:29.835090 6550 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0126 16:59:29.835097 6550 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pxh94_openshift-ovn-kubernetes(ab5b6f50-172b-4535-a0f9-5d103bcab4e7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf
1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:45Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.461094 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-295wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e50462-28e6-4531-ada4-e652310e6cce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf98h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf98h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:59:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-295wr\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:45Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.476012 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc864b0d-83bc-4954-9c61-ad650157caff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbde934a6c8acad10ca3ab8206d0ddbd4f7b17e9d304b898a68f4d3b0303bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:28Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b48f763d4aff37169399be766d5ab4f7ebbf91f304d139c9022a8556946eb107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0094e662f53c4832a984e05a880021af05ffc4c27f25394c28a070d9ef5490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb95f623
fa4f5f42217649ccce4225f9fe588bb3558da9334eef2b40ddb62486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb95f623fa4f5f42217649ccce4225f9fe588bb3558da9334eef2b40ddb62486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:45Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.487578 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63c75ede-5170-4db0-811b-5217ef8d72b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da26ddcfc6ccde3c9aabd63bdef3435f9a9eaab8c095bfeef5670c1295576cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ca3fa13d9e8d442efa93b44a870369f7df3fe7
562d77b98528f5c19a751f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xm9cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:45Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.500038 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-v7579" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a77e85f9-b566-4807-bb92-55963c97b93c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba87c9fc35c230bbee201a5176cb467309f0b9aee82dfc81f3b677a15486d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n9h7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03dc794e9c2035f2e1983eacad3e51d76223
cb1b82e2f402c73f9453e4bd2f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n9h7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-v7579\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:45Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.503739 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.503770 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.503779 4856 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.503793 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.503803 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:45Z","lastTransitionTime":"2026-01-26T16:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.512050 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tp5hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f28414c-12c1-4adb-be7b-6182310828eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af4930ff9cebd62106545fb3da2dfc93f7b591426fe4c85aa2e637b60c935f7\\\",\\\"
image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzc59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tp5hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:45Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.524127 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c09748c0963bdb50c9552f249d1135ea53e68c52babd788dad050951cb849cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:45Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.536844 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 18:48:04.984716375 +0000 UTC Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.539308 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:45Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.551340 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:45Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.561623 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t4fq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://627cd6b39fdedd967fffcfc0755439277b2d73016fba1ddc7f5b9d5deba43b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p5swq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t4fq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:45Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.576160 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq622" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a742e7b-c420-46e3-9e96-e9c744af6124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7222c9b91c0065a545bc1904d9864a5923cc13bfb6617daeb4a965a830f191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8plh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq622\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:45Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.589764 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59ecd87a-c5db-446d-ad3e-cfabbd648c1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89975b8f9428f81ab5d3fb48ced5dd9c837bea2feea3b89f5f7ff8d7d5d15b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40d432159a07ba20bf95f058bf8a597d67cbd8d852519bed035694f8ba3d8ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ba0f131eee14df389391e46bc772894e49e976c3663df5fb3bc98b3cb4a3d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\
\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03a01b651d9f66da4dd1f6e0d29ad97c0d6ae46b644c3d997d8dc99476706df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f07438e20bdf71c752bb661084c835341999c561bb5442d75e177223881276f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T16:58:51Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 16:58:43.431198 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 16:58:43.432227 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-99450334/tls.crt::/tmp/serving-cert-99450334/tls.key\\\\\\\"\\\\nI0126 16:58:50.828320 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 16:58:50.830449 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 16:58:50.830472 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 16:58:50.830509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 16:58:50.830515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 16:58:50.834885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 16:58:50.834958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 
16:58:50.834991 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 16:58:50.834905 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 16:58:50.835015 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 16:58:50.835084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 16:58:50.835106 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 16:58:50.835116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 16:58:50.837641 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2daa3f13d37b7ae19dfca406b6b5cfdcd4f211287f7d410193f2fee36a24553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffc4b80383322e2c628a33ac37f15fd77c7650ca108c022f97bb48aad023462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ffc4b80383322e2c628a33ac37f15fd77c7650ca108c022f97bb48aad023462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:45Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.601836 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191ad5d41024c88c1a4fbc30f307dd6340fd55b93b00d22f62209d0e82be286f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf2a4d8be409a46ffe5702797e56a80023a72a19fb7dc5c49e5f4984cedc600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:45Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.606380 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.606410 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.606420 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.606435 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.606446 4856 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:45Z","lastTransitionTime":"2026-01-26T16:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.626234 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c823748751e9938f10fab08e33b5fffff5a6d15961ce028204131b7b69a56c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:45Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.708503 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.708557 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.708569 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.708587 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.708602 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:45Z","lastTransitionTime":"2026-01-26T16:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.811443 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.811499 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.811513 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.811554 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.811571 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:45Z","lastTransitionTime":"2026-01-26T16:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.914398 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.914440 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.914449 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.914466 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:45 crc kubenswrapper[4856]: I0126 16:59:45.914478 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:45Z","lastTransitionTime":"2026-01-26T16:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.017762 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.017792 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.017801 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.017814 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.017823 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:46Z","lastTransitionTime":"2026-01-26T16:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.120154 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.120194 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.120205 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.120221 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.120232 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:46Z","lastTransitionTime":"2026-01-26T16:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.223305 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.223340 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.223352 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.223371 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.223383 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:46Z","lastTransitionTime":"2026-01-26T16:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.326750 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.326780 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.326791 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.326805 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.326814 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:46Z","lastTransitionTime":"2026-01-26T16:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.394792 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.394892 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:59:46 crc kubenswrapper[4856]: E0126 16:59:46.394969 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:59:46 crc kubenswrapper[4856]: E0126 16:59:46.395018 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.395079 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:59:46 crc kubenswrapper[4856]: E0126 16:59:46.395122 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.395142 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 16:59:46 crc kubenswrapper[4856]: E0126 16:59:46.395187 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-295wr" podUID="12e50462-28e6-4531-ada4-e652310e6cce" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.430122 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.430155 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.430163 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.430180 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.430188 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:46Z","lastTransitionTime":"2026-01-26T16:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.533924 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.534014 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.534040 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.534072 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.534097 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:46Z","lastTransitionTime":"2026-01-26T16:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.537607 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 02:34:08.921743991 +0000 UTC Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.637216 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.637268 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.637278 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.637294 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.637303 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:46Z","lastTransitionTime":"2026-01-26T16:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.740178 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.740246 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.740266 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.740295 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.740312 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:46Z","lastTransitionTime":"2026-01-26T16:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.803753 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rq622_7a742e7b-c420-46e3-9e96-e9c744af6124/kube-multus/0.log" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.803813 4856 generic.go:334] "Generic (PLEG): container finished" podID="7a742e7b-c420-46e3-9e96-e9c744af6124" containerID="ad7222c9b91c0065a545bc1904d9864a5923cc13bfb6617daeb4a965a830f191" exitCode=1 Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.803852 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rq622" event={"ID":"7a742e7b-c420-46e3-9e96-e9c744af6124","Type":"ContainerDied","Data":"ad7222c9b91c0065a545bc1904d9864a5923cc13bfb6617daeb4a965a830f191"} Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.804274 4856 scope.go:117] "RemoveContainer" containerID="ad7222c9b91c0065a545bc1904d9864a5923cc13bfb6617daeb4a965a830f191" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.830273 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:46Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.843817 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.843870 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.843884 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:46 crc 
kubenswrapper[4856]: I0126 16:59:46.843906 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.843923 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:46Z","lastTransitionTime":"2026-01-26T16:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.851926 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad7b59f9-beb7-49d6-a2d1-e29133e46854\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc53a6a6d4222ee5e1ac29fd5957d8cdf3fd42de72c68c85329374ce7afc4004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0c0b092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0c0b092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://249b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24
9b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v2l7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:46Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.889034 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71cb66e6f52823e7aa63f88ba1d153fde73816120aa75d4a6b910937303d2b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71cb66e6f52823e7aa63f88ba1d153fde73816120aa75d4a6b910937303d2b9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:59:29Z\\\",\\\"message\\\":\\\"n-kubernetes/ovnkube-node-pxh94\\\\nI0126 16:59:29.834990 6550 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-t4fq2\\\\nI0126 16:59:29.834996 6550 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-t4fq2\\\\nI0126 16:59:29.835005 6550 ovn.go:134] Ensuring zone local for Pod 
openshift-dns/node-resolver-t4fq2 in node crc\\\\nI0126 16:59:29.835026 6550 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 16:59:29.835056 6550 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0126 16:59:29.835079 6550 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0126 16:59:29.835090 6550 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0126 16:59:29.835097 6550 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pxh94_openshift-ovn-kubernetes(ab5b6f50-172b-4535-a0f9-5d103bcab4e7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf
1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:46Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.909881 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-295wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e50462-28e6-4531-ada4-e652310e6cce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf98h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf98h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:59:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-295wr\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:46Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.930977 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc864b0d-83bc-4954-9c61-ad650157caff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbde934a6c8acad10ca3ab8206d0ddbd4f7b17e9d304b898a68f4d3b0303bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:28Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b48f763d4aff37169399be766d5ab4f7ebbf91f304d139c9022a8556946eb107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0094e662f53c4832a984e05a880021af05ffc4c27f25394c28a070d9ef5490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb95f623
fa4f5f42217649ccce4225f9fe588bb3558da9334eef2b40ddb62486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb95f623fa4f5f42217649ccce4225f9fe588bb3558da9334eef2b40ddb62486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:46Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.947052 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c3b0574-b4cc-483d-ae88-6517d1f30772\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec9063a7c03990fc26fc47427f164a769fd649c2bdbd9d23ea7f646e569734be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67797295a8c3952902b2696c6fdb26b72ce1826b5ccd522a24aac90a0411b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67797295a8c3952902b2696c6fdb26b72ce1826b5ccd522a24aac90a0411b5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:46Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.950478 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.950511 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.950537 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.950557 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.950569 4856 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:46Z","lastTransitionTime":"2026-01-26T16:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.965690 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63c75ede-5170-4db0-811b-5217ef8d72b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da26ddcfc6ccde3c9aabd63bdef3435f9a9eaab8c095bfeef5670c1295576cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ca3fa13d9e8d442efa93b44a870369f7df3fe7562d77b98528f5c19a751f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xm9cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired 
or is not yet valid: current time 2026-01-26T16:59:46Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.983351 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-v7579" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a77e85f9-b566-4807-bb92-55963c97b93c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba87c9fc35c230bbee201a5176cb467309f0b9aee82dfc81f3b677a15486d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnl
y\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n9h7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03dc794e9c2035f2e1983eacad3e51d76223cb1b82e2f402c73f9453e4bd2f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n9h7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-v7579\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:46Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:46 crc kubenswrapper[4856]: I0126 16:59:46.997983 4856 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-tp5hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f28414c-12c1-4adb-be7b-6182310828eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af4930ff9cebd62106545fb3da2dfc93f7b591426fe4c85aa2e637b60c935f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzc59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tp5hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:46Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.016810 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c09748c0963bdb50c9552f249d1135ea53e68c52babd788dad050951cb849cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:47Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.031336 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:47Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.046932 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:47Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.052800 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.052840 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.052849 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.052863 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.052872 4856 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:47Z","lastTransitionTime":"2026-01-26T16:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.062040 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t4fq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://627cd6b39fdedd967fffcfc0755439277b2d73016fba1ddc7f5b9d5deba43b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p5swq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t4fq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:47Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.077063 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq622" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a742e7b-c420-46e3-9e96-e9c744af6124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:46Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad7222c9b91c0065a545bc1904d9864a5923cc13bfb6617daeb4a965a830f191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad7222c9b91c0065a545bc1904d9864a5923cc13bfb6617daeb4a965a830f191\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:59:45Z\\\",\\\"message\\\":\\\"2026-01-26T16:59:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_41ab0694-d9c8-49a7-bf30-57e732ac7550\\\\n2026-01-26T16:59:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_41ab0694-d9c8-49a7-bf30-57e732ac7550 to /host/opt/cni/bin/\\\\n2026-01-26T16:59:00Z [verbose] multus-daemon started\\\\n2026-01-26T16:59:00Z [verbose] Readiness Indicator file check\\\\n2026-01-26T16:59:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8plh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq622\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:47Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.092589 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59ecd87a-c5db-446d-ad3e-cfabbd648c1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89975b8f9428f81ab5d3fb48ced5dd9c837bea2feea3b89f5f7ff8d7d5d15b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40d432159a07ba20bf95f058bf8a597d67cbd8d852519bed035694f8ba3d8ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ba0f131eee14df389391e46bc772894e49e976c3663df5fb3bc98b3cb4a3d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\"
:\\\"cri-o://b03a01b651d9f66da4dd1f6e0d29ad97c0d6ae46b644c3d997d8dc99476706df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f07438e20bdf71c752bb661084c835341999c561bb5442d75e177223881276f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T16:58:51Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 16:58:43.431198 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 16:58:43.432227 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-99450334/tls.crt::/tmp/serving-cert-99450334/tls.key\\\\\\\"\\\\nI0126 16:58:50.828320 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 16:58:50.830449 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 16:58:50.830472 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 16:58:50.830509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 16:58:50.830515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 16:58:50.834885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 16:58:50.834958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 16:58:50.834991 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 16:58:50.834905 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 16:58:50.835015 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 16:58:50.835084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 16:58:50.835106 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 16:58:50.835116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 16:58:50.837641 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2daa3f13d37b7ae19dfca406b6b5cfdcd4f211287f7d410193f2fee36a24553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses
\\\":[{\\\"containerID\\\":\\\"cri-o://2ffc4b80383322e2c628a33ac37f15fd77c7650ca108c022f97bb48aad023462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ffc4b80383322e2c628a33ac37f15fd77c7650ca108c022f97bb48aad023462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:47Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.106157 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191ad5d41024c88c1a4fbc30f307dd6340fd55b93b00d22f62209d0e82be286f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf2a4d8be409a46ffe5702797e56a80023a72a19fb7dc5c49e5f4984cedc600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:47Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.119922 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c823748751e9938f10fab08e33b5fffff5a6d15961ce028204131b7b69a56c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T16:59:47Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.155154 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.155190 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.155201 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.155218 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.155229 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:47Z","lastTransitionTime":"2026-01-26T16:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.258300 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.258572 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.258641 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.258708 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.258770 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:47Z","lastTransitionTime":"2026-01-26T16:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.362163 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.362209 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.362221 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.362238 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.362251 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:47Z","lastTransitionTime":"2026-01-26T16:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.465295 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.465361 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.465378 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.465401 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.465419 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:47Z","lastTransitionTime":"2026-01-26T16:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.538384 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 18:20:50.526085108 +0000 UTC Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.570169 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.570355 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.570456 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.570628 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.570765 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:47Z","lastTransitionTime":"2026-01-26T16:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.674571 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.674942 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.675099 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.675241 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.675378 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:47Z","lastTransitionTime":"2026-01-26T16:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.778176 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.778459 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.778586 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.778669 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.778734 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:47Z","lastTransitionTime":"2026-01-26T16:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.811148 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rq622_7a742e7b-c420-46e3-9e96-e9c744af6124/kube-multus/0.log" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.811429 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rq622" event={"ID":"7a742e7b-c420-46e3-9e96-e9c744af6124","Type":"ContainerStarted","Data":"afeb20035224feeab28a92ac77b43a24e653e49c56a25590a9861019a2b7a8ff"} Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.827078 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c3b0574-b4cc-483d-ae88-6517d1f30772\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec9063a7c03990fc26fc47427f164a769fd649c2bdbd9d23ea7f646e569734be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef31
8bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67797295a8c3952902b2696c6fdb26b72ce1826b5ccd522a24aac90a0411b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67797295a8c3952902b2696c6fdb26b72ce1826b5ccd522a24aac90a0411b5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:47Z is after 2025-08-24T17:21:41Z" 
Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.840718 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63c75ede-5170-4db0-811b-5217ef8d72b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da26ddcfc6ccde3c9aabd63bdef3435f9a9eaab8c095bfeef5670c1295576cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ca3fa13d9e8d442efa93b44a870369f7df3fe7562d77b98528f5c19a751f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xm9cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:47Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.856328 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc864b0d-83bc-4954-9c61-ad650157caff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbde934a6c8acad10ca3ab8206d0ddbd4f7b17e9d304b898a68f4d3b0303bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b48f763d4aff37169399be766d5ab4f7ebbf91f304d139c9022a8556946eb107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0094e662f53c4832a984e05a880021af05ffc4c27f25394c28a070d9ef5490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb95f623fa4f5f42217649ccce4225f9fe588bb3558da9334eef2b40ddb62486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://cb95f623fa4f5f42217649ccce4225f9fe588bb3558da9334eef2b40ddb62486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:47Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.874101 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tp5hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f28414c-12c1-4adb-be7b-6182310828eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af
4930ff9cebd62106545fb3da2dfc93f7b591426fe4c85aa2e637b60c935f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzc59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tp5hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:47Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.881754 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.881789 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.881799 4856 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.881812 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.881821 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:47Z","lastTransitionTime":"2026-01-26T16:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.891371 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c09748c0963bdb50c9552f249d1135ea53e68c52babd788dad050951cb849cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b1
7de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:47Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.909929 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:47Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.929032 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-v7579" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a77e85f9-b566-4807-bb92-55963c97b93c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba87c9fc35c230bbee201a5176cb467309f0b9aee82dfc81f3b677a15486d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n9h7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03dc794e9c2035f2e1983eacad3e51d76223
cb1b82e2f402c73f9453e4bd2f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n9h7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-v7579\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:47Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.946342 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191ad5d41024c88c1a4fbc30f307dd6340fd55b93b00d22f62209d0e82be286f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf2a4d8be409a46ffe5702797e56a80023a72a19fb7dc5c49e5f4984cedc600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:47Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.963866 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c823748751e9938f10fab08e33b5fffff5a6d15961ce028204131b7b69a56c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T16:59:47Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.978248 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:47Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.984670 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.984719 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.984737 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.984761 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.984778 4856 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:47Z","lastTransitionTime":"2026-01-26T16:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:47 crc kubenswrapper[4856]: I0126 16:59:47.995424 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t4fq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://627cd6b39fdedd967fffcfc0755439277b2d73016fba1ddc7f5b9d5deba43b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p5swq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t4fq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:47Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.015012 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq622" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a742e7b-c420-46e3-9e96-e9c744af6124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afeb20035224feeab28a92ac77b43a24e653e49c56a25590a9861019a2b7a8ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad7222c9b91c0065a545bc1904d9864a5923cc13bfb6617daeb4a965a830f191\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:59:45Z\\\",\\\"message\\\":\\\"2026-01-26T16:59:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_41ab0694-d9c8-49a7-bf30-57e732ac7550\\\\n2026-01-26T16:59:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_41ab0694-d9c8-49a7-bf30-57e732ac7550 to /host/opt/cni/bin/\\\\n2026-01-26T16:59:00Z [verbose] multus-daemon started\\\\n2026-01-26T16:59:00Z [verbose] 
Readiness Indicator file check\\\\n2026-01-26T16:59:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8plh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq622\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:48Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.038355 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59ecd87a-c5db-446d-ad3e-cfabbd648c1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89975b8f9428f81ab5d3fb48ced5dd9c837bea2feea3b89f5f7ff8d7d5d15b3e\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40d432159a07ba20bf95f058bf8a597d67cbd8d852519bed035694f8ba3d8ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ba0f131eee14df389391e46bc772894e49e976c3663df5fb3bc98b3cb4a3d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-s
yncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03a01b651d9f66da4dd1f6e0d29ad97c0d6ae46b644c3d997d8dc99476706df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f07438e20bdf71c752bb661084c835341999c561bb5442d75e177223881276f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T16:58:51Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 16:58:43.431198 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 16:58:43.432227 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-99450334/tls.crt::/tmp/serving-cert-99450334/tls.key\\\\\\\"\\\\nI0126 16:58:50.828320 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 16:58:50.830449 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 16:58:50.830472 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 16:58:50.830509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" 
limit=400\\\\nI0126 16:58:50.830515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 16:58:50.834885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 16:58:50.834958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 16:58:50.834991 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 16:58:50.834905 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 16:58:50.835015 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 16:58:50.835084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 16:58:50.835106 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 16:58:50.835116 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 16:58:50.837641 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2daa3f13d37b7ae19dfca406b6b5cfdcd4f211287f7d410193f2fee36a24553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffc4b80383322e2c628a33ac37f15fd77c7650ca108c022f97bb48aad023462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ffc4b80383322e2c628a33ac37f15fd77c7650ca108c022f97bb48aad023462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-26T16:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:48Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.060392 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad7b59f9-beb7-49d6-a2d1-e29133e46854\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc53a6a6d4222ee5e1ac29fd5957d8cdf3fd42de72c68c85329374ce7afc4004\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"
bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0c0b092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0c0b092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://249b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\
"terminated\\\":{\\\"containerID\\\":\\\"cri-o://249b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v2l7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:48Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.087563 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.087641 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.087662 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.087694 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.087718 4856 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:48Z","lastTransitionTime":"2026-01-26T16:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.096818 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71cb66e6f52823e7aa63f88ba1d153fde73816120aa75d4a6b910937303d2b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71cb66e6f52823e7aa63f88ba1d153fde73816120aa75d4a6b910937303d2b9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:59:29Z\\\",\\\"message\\\":\\\"n-kubernetes/ovnkube-node-pxh94\\\\nI0126 16:59:29.834990 6550 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-t4fq2\\\\nI0126 16:59:29.834996 6550 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-t4fq2\\\\nI0126 16:59:29.835005 6550 ovn.go:134] Ensuring zone local for Pod 
openshift-dns/node-resolver-t4fq2 in node crc\\\\nI0126 16:59:29.835026 6550 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 16:59:29.835056 6550 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0126 16:59:29.835079 6550 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0126 16:59:29.835090 6550 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0126 16:59:29.835097 6550 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pxh94_openshift-ovn-kubernetes(ab5b6f50-172b-4535-a0f9-5d103bcab4e7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf
1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:48Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.113312 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-295wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e50462-28e6-4531-ada4-e652310e6cce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf98h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf98h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:59:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-295wr\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:48Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.136934 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:48Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.190325 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.190360 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.190372 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.190388 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.190400 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:48Z","lastTransitionTime":"2026-01-26T16:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.293382 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.293490 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.293517 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.293915 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.293948 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:48Z","lastTransitionTime":"2026-01-26T16:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.395059 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.395125 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.395101 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.395325 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 16:59:48 crc kubenswrapper[4856]: E0126 16:59:48.395648 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:59:48 crc kubenswrapper[4856]: E0126 16:59:48.395766 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:59:48 crc kubenswrapper[4856]: E0126 16:59:48.395913 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:59:48 crc kubenswrapper[4856]: E0126 16:59:48.396059 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-295wr" podUID="12e50462-28e6-4531-ada4-e652310e6cce" Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.396812 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.396901 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.396922 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.396948 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.396966 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:48Z","lastTransitionTime":"2026-01-26T16:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.499676 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.499730 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.499746 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.499769 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.499787 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:48Z","lastTransitionTime":"2026-01-26T16:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.539953 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 09:01:55.747805651 +0000 UTC Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.602948 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.603018 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.603039 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.603065 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.603088 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:48Z","lastTransitionTime":"2026-01-26T16:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.719500 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.719632 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.719650 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.719671 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.719685 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:48Z","lastTransitionTime":"2026-01-26T16:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.822317 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.822370 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.822383 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.822408 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.822424 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:48Z","lastTransitionTime":"2026-01-26T16:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.925613 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.925662 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.925677 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.925698 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:48 crc kubenswrapper[4856]: I0126 16:59:48.925728 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:48Z","lastTransitionTime":"2026-01-26T16:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.028163 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.028254 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.028288 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.028320 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.028344 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:49Z","lastTransitionTime":"2026-01-26T16:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.131290 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.131347 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.131358 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.131383 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.131395 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:49Z","lastTransitionTime":"2026-01-26T16:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.234172 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.234219 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.234231 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.234247 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.234260 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:49Z","lastTransitionTime":"2026-01-26T16:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.337935 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.337983 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.338000 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.338022 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.338037 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:49Z","lastTransitionTime":"2026-01-26T16:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.441326 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.441424 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.441479 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.441521 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.441595 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:49Z","lastTransitionTime":"2026-01-26T16:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.540190 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 21:26:29.075052447 +0000 UTC Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.544998 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.545053 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.545067 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.545085 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.545097 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:49Z","lastTransitionTime":"2026-01-26T16:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.565325 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.565367 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.565378 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.565393 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.565404 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:49Z","lastTransitionTime":"2026-01-26T16:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:49 crc kubenswrapper[4856]: E0126 16:59:49.579126 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17523591-a778-4a97-aeab-8a7a93101850\\\",\\\"systemUUID\\\":\\\"ca45d056-99cb-4442-8a44-7e899628ecb2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:49Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.583552 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.583603 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.583618 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.583635 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.583644 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:49Z","lastTransitionTime":"2026-01-26T16:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:49 crc kubenswrapper[4856]: E0126 16:59:49.597626 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17523591-a778-4a97-aeab-8a7a93101850\\\",\\\"systemUUID\\\":\\\"ca45d056-99cb-4442-8a44-7e899628ecb2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:49Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.602362 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.602411 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.602422 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.602441 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.602453 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:49Z","lastTransitionTime":"2026-01-26T16:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:49 crc kubenswrapper[4856]: E0126 16:59:49.617024 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17523591-a778-4a97-aeab-8a7a93101850\\\",\\\"systemUUID\\\":\\\"ca45d056-99cb-4442-8a44-7e899628ecb2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:49Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.621297 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.621339 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.621350 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.621367 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.621380 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:49Z","lastTransitionTime":"2026-01-26T16:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:49 crc kubenswrapper[4856]: E0126 16:59:49.636208 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17523591-a778-4a97-aeab-8a7a93101850\\\",\\\"systemUUID\\\":\\\"ca45d056-99cb-4442-8a44-7e899628ecb2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:49Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.640673 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.640705 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.640714 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.640732 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.640744 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:49Z","lastTransitionTime":"2026-01-26T16:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:49 crc kubenswrapper[4856]: E0126 16:59:49.652399 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17523591-a778-4a97-aeab-8a7a93101850\\\",\\\"systemUUID\\\":\\\"ca45d056-99cb-4442-8a44-7e899628ecb2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:49Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:49 crc kubenswrapper[4856]: E0126 16:59:49.652550 4856 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.654472 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.654501 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.654509 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.654534 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.654544 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:49Z","lastTransitionTime":"2026-01-26T16:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.757258 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.757306 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.757320 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.757338 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.757351 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:49Z","lastTransitionTime":"2026-01-26T16:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.894192 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.894253 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.894268 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.894289 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.894304 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:49Z","lastTransitionTime":"2026-01-26T16:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.996673 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.996727 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.996738 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.996758 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:49 crc kubenswrapper[4856]: I0126 16:59:49.996770 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:49Z","lastTransitionTime":"2026-01-26T16:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:50 crc kubenswrapper[4856]: I0126 16:59:50.099595 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:50 crc kubenswrapper[4856]: I0126 16:59:50.099676 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:50 crc kubenswrapper[4856]: I0126 16:59:50.099696 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:50 crc kubenswrapper[4856]: I0126 16:59:50.099723 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:50 crc kubenswrapper[4856]: I0126 16:59:50.099745 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:50Z","lastTransitionTime":"2026-01-26T16:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:50 crc kubenswrapper[4856]: I0126 16:59:50.202433 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:50 crc kubenswrapper[4856]: I0126 16:59:50.202489 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:50 crc kubenswrapper[4856]: I0126 16:59:50.202506 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:50 crc kubenswrapper[4856]: I0126 16:59:50.202548 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:50 crc kubenswrapper[4856]: I0126 16:59:50.202565 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:50Z","lastTransitionTime":"2026-01-26T16:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:50 crc kubenswrapper[4856]: I0126 16:59:50.305789 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:50 crc kubenswrapper[4856]: I0126 16:59:50.305842 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:50 crc kubenswrapper[4856]: I0126 16:59:50.305854 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:50 crc kubenswrapper[4856]: I0126 16:59:50.305870 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:50 crc kubenswrapper[4856]: I0126 16:59:50.305882 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:50Z","lastTransitionTime":"2026-01-26T16:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:50 crc kubenswrapper[4856]: I0126 16:59:50.394506 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:59:50 crc kubenswrapper[4856]: I0126 16:59:50.394585 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:59:50 crc kubenswrapper[4856]: I0126 16:59:50.394630 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 16:59:50 crc kubenswrapper[4856]: I0126 16:59:50.394585 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:59:50 crc kubenswrapper[4856]: E0126 16:59:50.394750 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:59:50 crc kubenswrapper[4856]: E0126 16:59:50.394853 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:59:50 crc kubenswrapper[4856]: E0126 16:59:50.395015 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:59:50 crc kubenswrapper[4856]: E0126 16:59:50.395145 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-295wr" podUID="12e50462-28e6-4531-ada4-e652310e6cce" Jan 26 16:59:50 crc kubenswrapper[4856]: I0126 16:59:50.409443 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:50 crc kubenswrapper[4856]: I0126 16:59:50.409494 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:50 crc kubenswrapper[4856]: I0126 16:59:50.409510 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:50 crc kubenswrapper[4856]: I0126 16:59:50.409552 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:50 crc kubenswrapper[4856]: I0126 16:59:50.409567 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:50Z","lastTransitionTime":"2026-01-26T16:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:50 crc kubenswrapper[4856]: I0126 16:59:50.512160 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:50 crc kubenswrapper[4856]: I0126 16:59:50.512229 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:50 crc kubenswrapper[4856]: I0126 16:59:50.512254 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:50 crc kubenswrapper[4856]: I0126 16:59:50.512285 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:50 crc kubenswrapper[4856]: I0126 16:59:50.512310 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:50Z","lastTransitionTime":"2026-01-26T16:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:50 crc kubenswrapper[4856]: I0126 16:59:50.541233 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 15:51:56.746485422 +0000 UTC Jan 26 16:59:50 crc kubenswrapper[4856]: I0126 16:59:50.615514 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:50 crc kubenswrapper[4856]: I0126 16:59:50.615605 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:50 crc kubenswrapper[4856]: I0126 16:59:50.615623 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:50 crc kubenswrapper[4856]: I0126 16:59:50.615688 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:50 crc kubenswrapper[4856]: I0126 16:59:50.615710 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:50Z","lastTransitionTime":"2026-01-26T16:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:50 crc kubenswrapper[4856]: I0126 16:59:50.718973 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:50 crc kubenswrapper[4856]: I0126 16:59:50.719024 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:50 crc kubenswrapper[4856]: I0126 16:59:50.719039 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:50 crc kubenswrapper[4856]: I0126 16:59:50.719058 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:50 crc kubenswrapper[4856]: I0126 16:59:50.719069 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:50Z","lastTransitionTime":"2026-01-26T16:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:50 crc kubenswrapper[4856]: I0126 16:59:50.821126 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:50 crc kubenswrapper[4856]: I0126 16:59:50.821164 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:50 crc kubenswrapper[4856]: I0126 16:59:50.821174 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:50 crc kubenswrapper[4856]: I0126 16:59:50.821190 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:50 crc kubenswrapper[4856]: I0126 16:59:50.821201 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:50Z","lastTransitionTime":"2026-01-26T16:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:50 crc kubenswrapper[4856]: I0126 16:59:50.923497 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:50 crc kubenswrapper[4856]: I0126 16:59:50.923579 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:50 crc kubenswrapper[4856]: I0126 16:59:50.923592 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:50 crc kubenswrapper[4856]: I0126 16:59:50.923610 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:50 crc kubenswrapper[4856]: I0126 16:59:50.923620 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:50Z","lastTransitionTime":"2026-01-26T16:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:51 crc kubenswrapper[4856]: I0126 16:59:51.026539 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:51 crc kubenswrapper[4856]: I0126 16:59:51.026580 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:51 crc kubenswrapper[4856]: I0126 16:59:51.026591 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:51 crc kubenswrapper[4856]: I0126 16:59:51.026625 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:51 crc kubenswrapper[4856]: I0126 16:59:51.026638 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:51Z","lastTransitionTime":"2026-01-26T16:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:51 crc kubenswrapper[4856]: I0126 16:59:51.129993 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:51 crc kubenswrapper[4856]: I0126 16:59:51.130060 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:51 crc kubenswrapper[4856]: I0126 16:59:51.130077 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:51 crc kubenswrapper[4856]: I0126 16:59:51.130103 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:51 crc kubenswrapper[4856]: I0126 16:59:51.130122 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:51Z","lastTransitionTime":"2026-01-26T16:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:51 crc kubenswrapper[4856]: I0126 16:59:51.233520 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:51 crc kubenswrapper[4856]: I0126 16:59:51.233604 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:51 crc kubenswrapper[4856]: I0126 16:59:51.233615 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:51 crc kubenswrapper[4856]: I0126 16:59:51.233639 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:51 crc kubenswrapper[4856]: I0126 16:59:51.233657 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:51Z","lastTransitionTime":"2026-01-26T16:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:51 crc kubenswrapper[4856]: I0126 16:59:51.336051 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:51 crc kubenswrapper[4856]: I0126 16:59:51.336120 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:51 crc kubenswrapper[4856]: I0126 16:59:51.336135 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:51 crc kubenswrapper[4856]: I0126 16:59:51.336151 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:51 crc kubenswrapper[4856]: I0126 16:59:51.336168 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:51Z","lastTransitionTime":"2026-01-26T16:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:51 crc kubenswrapper[4856]: I0126 16:59:51.439739 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:51 crc kubenswrapper[4856]: I0126 16:59:51.439823 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:51 crc kubenswrapper[4856]: I0126 16:59:51.439847 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:51 crc kubenswrapper[4856]: I0126 16:59:51.439877 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:51 crc kubenswrapper[4856]: I0126 16:59:51.439899 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:51Z","lastTransitionTime":"2026-01-26T16:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:51 crc kubenswrapper[4856]: I0126 16:59:51.541355 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 12:15:00.701583198 +0000 UTC Jan 26 16:59:51 crc kubenswrapper[4856]: I0126 16:59:51.542765 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:51 crc kubenswrapper[4856]: I0126 16:59:51.542811 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:51 crc kubenswrapper[4856]: I0126 16:59:51.542821 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:51 crc kubenswrapper[4856]: I0126 16:59:51.542838 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:51 crc kubenswrapper[4856]: I0126 16:59:51.542848 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:51Z","lastTransitionTime":"2026-01-26T16:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:51 crc kubenswrapper[4856]: I0126 16:59:51.645855 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:51 crc kubenswrapper[4856]: I0126 16:59:51.645898 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:51 crc kubenswrapper[4856]: I0126 16:59:51.645906 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:51 crc kubenswrapper[4856]: I0126 16:59:51.645923 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:51 crc kubenswrapper[4856]: I0126 16:59:51.645934 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:51Z","lastTransitionTime":"2026-01-26T16:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:51 crc kubenswrapper[4856]: I0126 16:59:51.749018 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:51 crc kubenswrapper[4856]: I0126 16:59:51.749567 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:51 crc kubenswrapper[4856]: I0126 16:59:51.749606 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:51 crc kubenswrapper[4856]: I0126 16:59:51.749625 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:51 crc kubenswrapper[4856]: I0126 16:59:51.749637 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:51Z","lastTransitionTime":"2026-01-26T16:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:51 crc kubenswrapper[4856]: I0126 16:59:51.851350 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:51 crc kubenswrapper[4856]: I0126 16:59:51.851400 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:51 crc kubenswrapper[4856]: I0126 16:59:51.851416 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:51 crc kubenswrapper[4856]: I0126 16:59:51.851434 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:51 crc kubenswrapper[4856]: I0126 16:59:51.851450 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:51Z","lastTransitionTime":"2026-01-26T16:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:51 crc kubenswrapper[4856]: I0126 16:59:51.954628 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:51 crc kubenswrapper[4856]: I0126 16:59:51.954682 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:51 crc kubenswrapper[4856]: I0126 16:59:51.954701 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:51 crc kubenswrapper[4856]: I0126 16:59:51.954725 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:51 crc kubenswrapper[4856]: I0126 16:59:51.954744 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:51Z","lastTransitionTime":"2026-01-26T16:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.058385 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.058436 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.058451 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.058470 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.058485 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:52Z","lastTransitionTime":"2026-01-26T16:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.161595 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.161634 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.161655 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.161683 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.161699 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:52Z","lastTransitionTime":"2026-01-26T16:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.265058 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.265092 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.265102 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.265122 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.265136 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:52Z","lastTransitionTime":"2026-01-26T16:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.368726 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.368802 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.368814 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.368831 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.368843 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:52Z","lastTransitionTime":"2026-01-26T16:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.394161 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.394201 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.394207 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.394161 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:59:52 crc kubenswrapper[4856]: E0126 16:59:52.394305 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:59:52 crc kubenswrapper[4856]: E0126 16:59:52.394397 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:59:52 crc kubenswrapper[4856]: E0126 16:59:52.394474 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:59:52 crc kubenswrapper[4856]: E0126 16:59:52.394593 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-295wr" podUID="12e50462-28e6-4531-ada4-e652310e6cce" Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.471801 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.471846 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.471856 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.471876 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.471888 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:52Z","lastTransitionTime":"2026-01-26T16:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.541715 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 00:19:05.375815469 +0000 UTC Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.574635 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.574668 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.574677 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.574692 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.574701 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:52Z","lastTransitionTime":"2026-01-26T16:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.677124 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.677187 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.677206 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.677231 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.677249 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:52Z","lastTransitionTime":"2026-01-26T16:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.779919 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.780073 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.780099 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.780163 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.780183 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:52Z","lastTransitionTime":"2026-01-26T16:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.883124 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.883164 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.883184 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.883206 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.883225 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:52Z","lastTransitionTime":"2026-01-26T16:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.985077 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.985130 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.985165 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.985182 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:52 crc kubenswrapper[4856]: I0126 16:59:52.985196 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:52Z","lastTransitionTime":"2026-01-26T16:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:53 crc kubenswrapper[4856]: I0126 16:59:53.088672 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:53 crc kubenswrapper[4856]: I0126 16:59:53.088745 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:53 crc kubenswrapper[4856]: I0126 16:59:53.088767 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:53 crc kubenswrapper[4856]: I0126 16:59:53.088799 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:53 crc kubenswrapper[4856]: I0126 16:59:53.088822 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:53Z","lastTransitionTime":"2026-01-26T16:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:53 crc kubenswrapper[4856]: I0126 16:59:53.192217 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:53 crc kubenswrapper[4856]: I0126 16:59:53.192288 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:53 crc kubenswrapper[4856]: I0126 16:59:53.192300 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:53 crc kubenswrapper[4856]: I0126 16:59:53.192322 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:53 crc kubenswrapper[4856]: I0126 16:59:53.192334 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:53Z","lastTransitionTime":"2026-01-26T16:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:53 crc kubenswrapper[4856]: I0126 16:59:53.295672 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:53 crc kubenswrapper[4856]: I0126 16:59:53.295755 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:53 crc kubenswrapper[4856]: I0126 16:59:53.295797 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:53 crc kubenswrapper[4856]: I0126 16:59:53.295830 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:53 crc kubenswrapper[4856]: I0126 16:59:53.295853 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:53Z","lastTransitionTime":"2026-01-26T16:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:53 crc kubenswrapper[4856]: I0126 16:59:53.398475 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:53 crc kubenswrapper[4856]: I0126 16:59:53.398547 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:53 crc kubenswrapper[4856]: I0126 16:59:53.398558 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:53 crc kubenswrapper[4856]: I0126 16:59:53.398576 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:53 crc kubenswrapper[4856]: I0126 16:59:53.398660 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:53Z","lastTransitionTime":"2026-01-26T16:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:53 crc kubenswrapper[4856]: I0126 16:59:53.501235 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:53 crc kubenswrapper[4856]: I0126 16:59:53.501275 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:53 crc kubenswrapper[4856]: I0126 16:59:53.501288 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:53 crc kubenswrapper[4856]: I0126 16:59:53.501304 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:53 crc kubenswrapper[4856]: I0126 16:59:53.501315 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:53Z","lastTransitionTime":"2026-01-26T16:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:53 crc kubenswrapper[4856]: I0126 16:59:53.543090 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 17:23:29.074884746 +0000 UTC Jan 26 16:59:53 crc kubenswrapper[4856]: I0126 16:59:53.603398 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:53 crc kubenswrapper[4856]: I0126 16:59:53.603453 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:53 crc kubenswrapper[4856]: I0126 16:59:53.603472 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:53 crc kubenswrapper[4856]: I0126 16:59:53.603488 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:53 crc kubenswrapper[4856]: I0126 16:59:53.603499 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:53Z","lastTransitionTime":"2026-01-26T16:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:53 crc kubenswrapper[4856]: I0126 16:59:53.707198 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:53 crc kubenswrapper[4856]: I0126 16:59:53.707272 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:53 crc kubenswrapper[4856]: I0126 16:59:53.707296 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:53 crc kubenswrapper[4856]: I0126 16:59:53.707331 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:53 crc kubenswrapper[4856]: I0126 16:59:53.707350 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:53Z","lastTransitionTime":"2026-01-26T16:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:53 crc kubenswrapper[4856]: I0126 16:59:53.810627 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:53 crc kubenswrapper[4856]: I0126 16:59:53.810683 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:53 crc kubenswrapper[4856]: I0126 16:59:53.810702 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:53 crc kubenswrapper[4856]: I0126 16:59:53.810729 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:53 crc kubenswrapper[4856]: I0126 16:59:53.810747 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:53Z","lastTransitionTime":"2026-01-26T16:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:53 crc kubenswrapper[4856]: I0126 16:59:53.913648 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:53 crc kubenswrapper[4856]: I0126 16:59:53.913783 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:53 crc kubenswrapper[4856]: I0126 16:59:53.913810 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:53 crc kubenswrapper[4856]: I0126 16:59:53.913844 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:53 crc kubenswrapper[4856]: I0126 16:59:53.913869 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:53Z","lastTransitionTime":"2026-01-26T16:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.017201 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.017246 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.017257 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.017273 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.017284 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:54Z","lastTransitionTime":"2026-01-26T16:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.119943 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.119986 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.119997 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.120012 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.120022 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:54Z","lastTransitionTime":"2026-01-26T16:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.222755 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.222783 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.222793 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.222808 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.222817 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:54Z","lastTransitionTime":"2026-01-26T16:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.325481 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.325563 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.325576 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.325593 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.325603 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:54Z","lastTransitionTime":"2026-01-26T16:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.394368 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.394399 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.394762 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:59:54 crc kubenswrapper[4856]: E0126 16:59:54.394916 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.394978 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 16:59:54 crc kubenswrapper[4856]: E0126 16:59:54.395039 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:59:54 crc kubenswrapper[4856]: E0126 16:59:54.395255 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:59:54 crc kubenswrapper[4856]: E0126 16:59:54.395313 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-295wr" podUID="12e50462-28e6-4531-ada4-e652310e6cce" Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.428670 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.428745 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.428762 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.428784 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.428800 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:54Z","lastTransitionTime":"2026-01-26T16:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.531293 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.531349 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.531383 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.531412 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.531438 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:54Z","lastTransitionTime":"2026-01-26T16:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.543593 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 08:41:14.922161812 +0000 UTC Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.634393 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.634477 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.634501 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.634568 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.634597 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:54Z","lastTransitionTime":"2026-01-26T16:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.737954 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.738011 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.738020 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.738035 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.738064 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:54Z","lastTransitionTime":"2026-01-26T16:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.840448 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.840501 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.840553 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.840573 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.840589 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:54Z","lastTransitionTime":"2026-01-26T16:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.944047 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.944107 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.944120 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.944139 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:54 crc kubenswrapper[4856]: I0126 16:59:54.944155 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:54Z","lastTransitionTime":"2026-01-26T16:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.047443 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.047572 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.047604 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.047633 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.047652 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:55Z","lastTransitionTime":"2026-01-26T16:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.151088 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.151174 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.151199 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.151229 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.151251 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:55Z","lastTransitionTime":"2026-01-26T16:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.255169 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.255246 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.255263 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.255286 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.255301 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:55Z","lastTransitionTime":"2026-01-26T16:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.359348 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.359416 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.359438 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.359487 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.359513 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:55Z","lastTransitionTime":"2026-01-26T16:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.410699 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-v7579" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a77e85f9-b566-4807-bb92-55963c97b93c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba87c9fc35c230bbee201a5176cb467309f0b9aee82dfc81f3b677a15486d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n9h7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03dc794e9c2035f2e1983eacad3e51d76223cb1b82e2f402c73f9453e4bd2f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n9h7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-v7579\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:55Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.424736 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tp5hk" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f28414c-12c1-4adb-be7b-6182310828eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af4930ff9cebd62106545fb3da2dfc93f7b591426fe4c85aa2e637b60c935f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzc59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tp5hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:55Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.442306 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c09748c0963bdb50c9552f249d1135ea53e68c52babd788dad050951cb849cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:55Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.460933 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:55Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.463243 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.463291 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.463302 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 
16:59:55.463320 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.463332 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:55Z","lastTransitionTime":"2026-01-26T16:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.474851 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:55Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.487251 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t4fq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://627cd6b39fdedd967fffcfc0755439277b2d73016fba1ddc7f5b9d5deba43b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p5swq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t4fq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:55Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.500473 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq622" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a742e7b-c420-46e3-9e96-e9c744af6124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afeb20035224feeab28a92ac77b43a24e653e49c56a25590a9861019a2b7a8ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad7222c9b91c0065a545bc1904d9864a5923cc13bfb6617daeb4a965a830f191\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:59:45Z\\\",\\\"message\\\":\\\"2026-01-26T16:59:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_41ab0694-d9c8-49a7-bf30-57e732ac7550\\\\n2026-01-26T16:59:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_41ab0694-d9c8-49a7-bf30-57e732ac7550 to /host/opt/cni/bin/\\\\n2026-01-26T16:59:00Z [verbose] multus-daemon started\\\\n2026-01-26T16:59:00Z [verbose] Readiness Indicator file check\\\\n2026-01-26T16:59:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8plh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq622\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:55Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.514439 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59ecd87a-c5db-446d-ad3e-cfabbd648c1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89975b8f9428f81ab5d3fb48ced5dd9c837bea2feea3b89f5f7ff8d7d5d15b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40d432159a07ba20bf95f058bf8a597d67cbd8d852519bed035694f8ba3d8ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ba0f131eee14df389391e46bc772894e49e976c3663df5fb3bc98b3cb4a3d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03a01b651d9f66da4dd1f6e0d29ad97c0d6ae46b644c3d997d8dc99476706df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f07438e20bdf71c752bb661084c835341999c561bb5442d75e177223881276f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T16:58:51Z\\\"
,\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 16:58:43.431198 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 16:58:43.432227 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-99450334/tls.crt::/tmp/serving-cert-99450334/tls.key\\\\\\\"\\\\nI0126 16:58:50.828320 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 16:58:50.830449 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 16:58:50.830472 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 16:58:50.830509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 16:58:50.830515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 16:58:50.834885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 16:58:50.834958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 16:58:50.834991 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 16:58:50.834905 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 16:58:50.835015 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 16:58:50.835084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 16:58:50.835106 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 16:58:50.835116 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 16:58:50.837641 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2daa3f13d37b7ae19dfca406b6b5cfdcd4f211287f7d410193f2fee36a24553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffc4b80383322e2c628a33ac37f15fd77c7650ca108c022f97bb48aad023462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ffc4b80383322e2c628a33ac37f15fd
77c7650ca108c022f97bb48aad023462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:55Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.529112 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191ad5d41024c88c1a4fbc30f307dd6340fd55b93b00d22f62209d0e82be286f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf2a4d8be409a46ffe5702797e56a80023a72a19fb7dc5c49e5f4984cedc600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:55Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.541478 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c823748751e9938f10fab08e33b5fffff5a6d15961ce028204131b7b69a56c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T16:59:55Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.543729 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 01:32:02.463317825 +0000 UTC Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.557384 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:55Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.565794 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.565827 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.565839 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.565855 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.565866 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:55Z","lastTransitionTime":"2026-01-26T16:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.573268 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad7b59f9-beb7-49d6-a2d1-e29133e46854\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc53a6a6d4222ee5e1ac29fd5957d8cdf3fd42de72c68c85329374ce7afc4004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0c0b092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0c0b092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://249b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://249b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v2l7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:55Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.593039 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71cb66e6f52823e7aa63f88ba1d153fde73816120aa75d4a6b910937303d2b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71cb66e6f52823e7aa63f88ba1d153fde73816120aa75d4a6b910937303d2b9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:59:29Z\\\",\\\"message\\\":\\\"n-kubernetes/ovnkube-node-pxh94\\\\nI0126 16:59:29.834990 6550 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-t4fq2\\\\nI0126 16:59:29.834996 6550 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-t4fq2\\\\nI0126 16:59:29.835005 6550 ovn.go:134] Ensuring zone local for Pod 
openshift-dns/node-resolver-t4fq2 in node crc\\\\nI0126 16:59:29.835026 6550 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 16:59:29.835056 6550 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0126 16:59:29.835079 6550 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0126 16:59:29.835090 6550 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0126 16:59:29.835097 6550 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pxh94_openshift-ovn-kubernetes(ab5b6f50-172b-4535-a0f9-5d103bcab4e7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf
1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:55Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.604709 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-295wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e50462-28e6-4531-ada4-e652310e6cce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf98h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf98h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:59:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-295wr\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:55Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.619590 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc864b0d-83bc-4954-9c61-ad650157caff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbde934a6c8acad10ca3ab8206d0ddbd4f7b17e9d304b898a68f4d3b0303bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:28Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b48f763d4aff37169399be766d5ab4f7ebbf91f304d139c9022a8556946eb107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0094e662f53c4832a984e05a880021af05ffc4c27f25394c28a070d9ef5490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb95f623
fa4f5f42217649ccce4225f9fe588bb3558da9334eef2b40ddb62486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb95f623fa4f5f42217649ccce4225f9fe588bb3558da9334eef2b40ddb62486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:55Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.630890 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c3b0574-b4cc-483d-ae88-6517d1f30772\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec9063a7c03990fc26fc47427f164a769fd649c2bdbd9d23ea7f646e569734be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67797295a8c3952902b2696c6fdb26b72ce1826b5ccd522a24aac90a0411b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67797295a8c3952902b2696c6fdb26b72ce1826b5ccd522a24aac90a0411b5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:55Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.642994 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63c75ede-5170-4db0-811b-5217ef8d72b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da26ddcfc6ccde3c9aabd63bdef3435f9a9eaab8c095bfeef5670c1295576cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ca3fa13d9e8d442efa93b44a870369f7df3fe7
562d77b98528f5c19a751f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xm9cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:55Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.668541 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.668621 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.668636 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:55 crc 
kubenswrapper[4856]: I0126 16:59:55.668663 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.668679 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:55Z","lastTransitionTime":"2026-01-26T16:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.770914 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.770956 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.770972 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.770988 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.771001 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:55Z","lastTransitionTime":"2026-01-26T16:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.874080 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.874137 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.874148 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.874167 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.874200 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:55Z","lastTransitionTime":"2026-01-26T16:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.978816 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.978868 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.978883 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.978905 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:55 crc kubenswrapper[4856]: I0126 16:59:55.978921 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:55Z","lastTransitionTime":"2026-01-26T16:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:56 crc kubenswrapper[4856]: I0126 16:59:56.082661 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:56 crc kubenswrapper[4856]: I0126 16:59:56.082715 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:56 crc kubenswrapper[4856]: I0126 16:59:56.082732 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:56 crc kubenswrapper[4856]: I0126 16:59:56.082757 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:56 crc kubenswrapper[4856]: I0126 16:59:56.082774 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:56Z","lastTransitionTime":"2026-01-26T16:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:56 crc kubenswrapper[4856]: I0126 16:59:56.185373 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:56 crc kubenswrapper[4856]: I0126 16:59:56.185418 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:56 crc kubenswrapper[4856]: I0126 16:59:56.185428 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:56 crc kubenswrapper[4856]: I0126 16:59:56.185441 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:56 crc kubenswrapper[4856]: I0126 16:59:56.185450 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:56Z","lastTransitionTime":"2026-01-26T16:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:56 crc kubenswrapper[4856]: I0126 16:59:56.287936 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:56 crc kubenswrapper[4856]: I0126 16:59:56.288003 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:56 crc kubenswrapper[4856]: I0126 16:59:56.288026 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:56 crc kubenswrapper[4856]: I0126 16:59:56.288057 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:56 crc kubenswrapper[4856]: I0126 16:59:56.288078 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:56Z","lastTransitionTime":"2026-01-26T16:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:56 crc kubenswrapper[4856]: I0126 16:59:56.391286 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:56 crc kubenswrapper[4856]: I0126 16:59:56.391345 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:56 crc kubenswrapper[4856]: I0126 16:59:56.391356 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:56 crc kubenswrapper[4856]: I0126 16:59:56.391376 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:56 crc kubenswrapper[4856]: I0126 16:59:56.391390 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:56Z","lastTransitionTime":"2026-01-26T16:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:56 crc kubenswrapper[4856]: I0126 16:59:56.394645 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:59:56 crc kubenswrapper[4856]: I0126 16:59:56.394665 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:59:56 crc kubenswrapper[4856]: I0126 16:59:56.394723 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:59:56 crc kubenswrapper[4856]: I0126 16:59:56.394822 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 16:59:56 crc kubenswrapper[4856]: E0126 16:59:56.394901 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:59:56 crc kubenswrapper[4856]: E0126 16:59:56.395027 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:59:56 crc kubenswrapper[4856]: E0126 16:59:56.395104 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-295wr" podUID="12e50462-28e6-4531-ada4-e652310e6cce" Jan 26 16:59:56 crc kubenswrapper[4856]: E0126 16:59:56.395736 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:59:56 crc kubenswrapper[4856]: I0126 16:59:56.396101 4856 scope.go:117] "RemoveContainer" containerID="71cb66e6f52823e7aa63f88ba1d153fde73816120aa75d4a6b910937303d2b9e" Jan 26 16:59:56 crc kubenswrapper[4856]: I0126 16:59:56.495098 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:56 crc kubenswrapper[4856]: I0126 16:59:56.495576 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:56 crc kubenswrapper[4856]: I0126 16:59:56.495594 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:56 crc kubenswrapper[4856]: I0126 16:59:56.495619 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:56 crc kubenswrapper[4856]: I0126 16:59:56.495641 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:56Z","lastTransitionTime":"2026-01-26T16:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:56 crc kubenswrapper[4856]: I0126 16:59:56.543882 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 21:28:58.629895815 +0000 UTC Jan 26 16:59:56 crc kubenswrapper[4856]: I0126 16:59:56.598332 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:56 crc kubenswrapper[4856]: I0126 16:59:56.598365 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:56 crc kubenswrapper[4856]: I0126 16:59:56.598374 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:56 crc kubenswrapper[4856]: I0126 16:59:56.598388 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:56 crc kubenswrapper[4856]: I0126 16:59:56.598397 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:56Z","lastTransitionTime":"2026-01-26T16:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:56 crc kubenswrapper[4856]: I0126 16:59:56.700724 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:56 crc kubenswrapper[4856]: I0126 16:59:56.700797 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:56 crc kubenswrapper[4856]: I0126 16:59:56.700820 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:56 crc kubenswrapper[4856]: I0126 16:59:56.700848 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:56 crc kubenswrapper[4856]: I0126 16:59:56.700869 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:56Z","lastTransitionTime":"2026-01-26T16:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:56 crc kubenswrapper[4856]: I0126 16:59:56.803639 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:56 crc kubenswrapper[4856]: I0126 16:59:56.803709 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:56 crc kubenswrapper[4856]: I0126 16:59:56.803731 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:56 crc kubenswrapper[4856]: I0126 16:59:56.803757 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:56 crc kubenswrapper[4856]: I0126 16:59:56.803776 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:56Z","lastTransitionTime":"2026-01-26T16:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:56 crc kubenswrapper[4856]: I0126 16:59:56.907142 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:56 crc kubenswrapper[4856]: I0126 16:59:56.907224 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:56 crc kubenswrapper[4856]: I0126 16:59:56.907236 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:56 crc kubenswrapper[4856]: I0126 16:59:56.907258 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:56 crc kubenswrapper[4856]: I0126 16:59:56.907272 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:56Z","lastTransitionTime":"2026-01-26T16:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.011799 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.011890 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.011904 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.011929 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.011944 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:57Z","lastTransitionTime":"2026-01-26T16:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.114969 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.115045 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.115058 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.115077 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.115087 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:57Z","lastTransitionTime":"2026-01-26T16:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.218195 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.218256 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.218271 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.218290 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.218303 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:57Z","lastTransitionTime":"2026-01-26T16:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.321494 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.321580 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.321594 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.321617 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.321634 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:57Z","lastTransitionTime":"2026-01-26T16:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.424640 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.424702 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.424725 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.424750 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.424766 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:57Z","lastTransitionTime":"2026-01-26T16:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.527503 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.527568 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.527583 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.527603 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.527617 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:57Z","lastTransitionTime":"2026-01-26T16:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.544748 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 01:11:39.460528362 +0000 UTC Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.630805 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.630859 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.630875 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.630897 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.630915 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:57Z","lastTransitionTime":"2026-01-26T16:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.733810 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.733871 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.733884 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.733906 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.733921 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:57Z","lastTransitionTime":"2026-01-26T16:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.837068 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.837120 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.837128 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.837145 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.837157 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:57Z","lastTransitionTime":"2026-01-26T16:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.851916 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pxh94_ab5b6f50-172b-4535-a0f9-5d103bcab4e7/ovnkube-controller/2.log" Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.854488 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" event={"ID":"ab5b6f50-172b-4535-a0f9-5d103bcab4e7","Type":"ContainerStarted","Data":"203d756903498bed1c57a3e87a95f4b24f808514567a99505ba2f3cfec468cb6"} Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.855477 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.869709 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tp5hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f28414c-12c1-4adb-be7b-6182310828eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af4930ff9cebd62106545fb3da2dfc
93f7b591426fe4c85aa2e637b60c935f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzc59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tp5hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:57Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.883513 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c09748c0963bdb50c9552f249d1135ea53e68c52babd788dad050951cb849cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:57Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.894950 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:57Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.906107 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-v7579" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a77e85f9-b566-4807-bb92-55963c97b93c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba87c9fc35c230bbee201a5176cb467309f0b9aee82dfc81f3b677a15486d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n9h7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03dc794e9c2035f2e1983eacad3e51d76223
cb1b82e2f402c73f9453e4bd2f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n9h7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-v7579\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:57Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.919113 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59ecd87a-c5db-446d-ad3e-cfabbd648c1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89975b8f9428f81ab5d3fb48ced5dd9c837bea2feea3b89f5f7ff8d7d5d15b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40d432159a07ba20bf95f058bf8a597d67cbd8d852519bed035694f8ba3d8ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ba0f131eee14df389391e46bc772894e49e976c3663df5fb3bc98b3cb4a3d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03a01b651d9f66da4dd1f6e0d29ad97c0d6ae46b644c3d997d8dc99476706df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f07438e20bdf71c752bb661084c835341999c561bb5442d75e177223881276f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T16:58:51Z\\\"
,\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 16:58:43.431198 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 16:58:43.432227 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-99450334/tls.crt::/tmp/serving-cert-99450334/tls.key\\\\\\\"\\\\nI0126 16:58:50.828320 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 16:58:50.830449 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 16:58:50.830472 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 16:58:50.830509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 16:58:50.830515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 16:58:50.834885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 16:58:50.834958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 16:58:50.834991 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 16:58:50.834905 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 16:58:50.835015 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 16:58:50.835084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 16:58:50.835106 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 16:58:50.835116 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 16:58:50.837641 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2daa3f13d37b7ae19dfca406b6b5cfdcd4f211287f7d410193f2fee36a24553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffc4b80383322e2c628a33ac37f15fd77c7650ca108c022f97bb48aad023462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ffc4b80383322e2c628a33ac37f15fd
77c7650ca108c022f97bb48aad023462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:57Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.934518 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191ad5d41024c88c1a4fbc30f307dd6340fd55b93b00d22f62209d0e82be286f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf2a4d8be409a46ffe5702797e56a80023a72a19fb7dc5c49e5f4984cedc600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:57Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.939733 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.939790 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.939808 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.939832 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.939850 4856 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:57Z","lastTransitionTime":"2026-01-26T16:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.949559 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c823748751e9938f10fab08e33b5fffff5a6d15961ce028204131b7b69a56c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:57Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.969257 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:57Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:57 crc kubenswrapper[4856]: I0126 16:59:57.987381 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t4fq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://627cd6b39fdedd967fffcfc0755439277b2d73016fba1ddc7f5b9d5deba43b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p5swq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t4fq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:57Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.007566 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq622" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a742e7b-c420-46e3-9e96-e9c744af6124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afeb20035224feeab28a92ac77b43a24e653e49c56a25590a9861019a2b7a8ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad7222c9b91c0065a545bc1904d9864a5923cc13bfb6617daeb4a965a830f191\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:59:45Z\\\",\\\"message\\\":\\\"2026-01-26T16:59:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_41ab0694-d9c8-49a7-bf30-57e732ac7550\\\\n2026-01-26T16:59:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_41ab0694-d9c8-49a7-bf30-57e732ac7550 to /host/opt/cni/bin/\\\\n2026-01-26T16:59:00Z [verbose] multus-daemon started\\\\n2026-01-26T16:59:00Z [verbose] Readiness Indicator file check\\\\n2026-01-26T16:59:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8plh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq622\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:58Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.027041 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:58Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.042929 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.042976 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.042987 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.043004 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.043015 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:58Z","lastTransitionTime":"2026-01-26T16:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.047074 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad7b59f9-beb7-49d6-a2d1-e29133e46854\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc53a6a6d4222ee5e1ac29fd5957d8cdf3fd42de72c68c85329374ce7afc4004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0c0b092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0c0b092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://249b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://249b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v2l7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:58Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.069498 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203d756903498bed1c57a3e87a95f4b24f808514567a99505ba2f3cfec468cb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71cb66e6f52823e7aa63f88ba1d153fde73816120aa75d4a6b910937303d2b9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:59:29Z\\\",\\\"message\\\":\\\"n-kubernetes/ovnkube-node-pxh94\\\\nI0126 16:59:29.834990 6550 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-t4fq2\\\\nI0126 16:59:29.834996 6550 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-t4fq2\\\\nI0126 16:59:29.835005 6550 ovn.go:134] Ensuring zone local for Pod 
openshift-dns/node-resolver-t4fq2 in node crc\\\\nI0126 16:59:29.835026 6550 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 16:59:29.835056 6550 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0126 16:59:29.835079 6550 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0126 16:59:29.835090 6550 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0126 16:59:29.835097 6550 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:58Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.083121 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-295wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e50462-28e6-4531-ada4-e652310e6cce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf98h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf98h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:59:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-295wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:58Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:58 crc 
kubenswrapper[4856]: I0126 16:59:58.099830 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc864b0d-83bc-4954-9c61-ad650157caff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbde934a6c8acad10ca3ab8206d0ddbd4f7b17e9d304b898a68f4d3b0303bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b48f763d4aff37169399be766d5ab4f7ebbf91f304d139c9022a8556946eb107\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0094e662f53c4832a984e05a880021af05ffc4c27f25394c28a070d9ef5490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb95f623fa4f5f42217649ccce4225f9fe588bb3558da9334eef2b40ddb62486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb95f623fa4f5f42217649ccce4225f9fe588bb3558da9334eef2b40ddb62486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:58Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.111360 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c3b0574-b4cc-483d-ae88-6517d1f30772\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec9063a7c03990fc26fc47427f164a769fd649c2bdbd9d23ea7f646e569734be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67797295a8c3952902b2696c6fdb26b72ce1826b5ccd522a24aac90a0411b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67797295a8c3952902b2696c6fdb26b72ce1826b5ccd522a24aac90a0411b5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:58Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.127104 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63c75ede-5170-4db0-811b-5217ef8d72b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da26ddcfc6ccde3c9aabd63bdef3435f9a9eaab8c095bfeef5670c1295576cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ca3fa13d9e8d442efa93b44a870369f7df3fe7
562d77b98528f5c19a751f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xm9cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:58Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.145920 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.145953 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.145963 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:58 crc 
kubenswrapper[4856]: I0126 16:59:58.145981 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.146000 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:58Z","lastTransitionTime":"2026-01-26T16:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.248652 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.248711 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.248721 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.248739 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.248754 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:58Z","lastTransitionTime":"2026-01-26T16:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.351927 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.351979 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.351990 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.352011 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.352026 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:58Z","lastTransitionTime":"2026-01-26T16:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.394778 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.394819 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.394878 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:59:58 crc kubenswrapper[4856]: E0126 16:59:58.394939 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 16:59:58 crc kubenswrapper[4856]: E0126 16:59:58.395123 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.395174 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:59:58 crc kubenswrapper[4856]: E0126 16:59:58.395258 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 16:59:58 crc kubenswrapper[4856]: E0126 16:59:58.395320 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-295wr" podUID="12e50462-28e6-4531-ada4-e652310e6cce" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.455365 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.455445 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.455469 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.455498 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.455520 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:58Z","lastTransitionTime":"2026-01-26T16:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.545367 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 10:45:51.581499729 +0000 UTC Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.558907 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.558984 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.559009 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.559037 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.559058 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:58Z","lastTransitionTime":"2026-01-26T16:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.660980 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.661035 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.661051 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.661076 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.661096 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:58Z","lastTransitionTime":"2026-01-26T16:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.699629 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 16:59:58 crc kubenswrapper[4856]: E0126 16:59:58.699792 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 17:01:02.699760151 +0000 UTC m=+158.653014172 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.699845 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.699929 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.699992 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 16:59:58 crc kubenswrapper[4856]: E0126 16:59:58.700014 4856 secret.go:188] Couldn't get secret 
openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.700041 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 16:59:58 crc kubenswrapper[4856]: E0126 16:59:58.700064 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 17:01:02.70005169 +0000 UTC m=+158.653305671 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 16:59:58 crc kubenswrapper[4856]: E0126 16:59:58.700185 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 16:59:58 crc kubenswrapper[4856]: E0126 16:59:58.700202 4856 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 16:59:58 crc kubenswrapper[4856]: E0126 16:59:58.700280 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Jan 26 16:59:58 crc kubenswrapper[4856]: E0126 16:59:58.700321 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 16:59:58 crc kubenswrapper[4856]: E0126 16:59:58.700341 4856 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:59:58 crc kubenswrapper[4856]: E0126 16:59:58.700377 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 17:01:02.700336679 +0000 UTC m=+158.653590700 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 16:59:58 crc kubenswrapper[4856]: E0126 16:59:58.700409 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 17:01:02.700390281 +0000 UTC m=+158.653644302 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:59:58 crc kubenswrapper[4856]: E0126 16:59:58.700223 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 16:59:58 crc kubenswrapper[4856]: E0126 16:59:58.700459 4856 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:59:58 crc kubenswrapper[4856]: E0126 16:59:58.700568 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 17:01:02.700519694 +0000 UTC m=+158.653773715 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.763452 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.763495 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.763506 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.763562 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.763576 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:58Z","lastTransitionTime":"2026-01-26T16:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.865867 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.865915 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.865944 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.865961 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.865975 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:58Z","lastTransitionTime":"2026-01-26T16:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.969160 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.969241 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.969264 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.969299 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:58 crc kubenswrapper[4856]: I0126 16:59:58.969326 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:58Z","lastTransitionTime":"2026-01-26T16:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.072786 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.072863 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.072875 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.072892 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.072904 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:59Z","lastTransitionTime":"2026-01-26T16:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.175853 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.175931 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.175948 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.175979 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.175998 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:59Z","lastTransitionTime":"2026-01-26T16:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.279270 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.279336 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.279357 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.279387 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.279411 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:59Z","lastTransitionTime":"2026-01-26T16:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.382575 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.382657 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.382675 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.382706 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.382725 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:59Z","lastTransitionTime":"2026-01-26T16:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.486600 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.486663 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.486681 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.486705 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.486722 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:59Z","lastTransitionTime":"2026-01-26T16:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.545972 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 01:17:42.634479283 +0000 UTC Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.590883 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.590918 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.590928 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.590944 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.590953 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:59Z","lastTransitionTime":"2026-01-26T16:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.694307 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.694365 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.694382 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.694405 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.694423 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:59Z","lastTransitionTime":"2026-01-26T16:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.797730 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.797757 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.797765 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.797779 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.797788 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:59Z","lastTransitionTime":"2026-01-26T16:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.865450 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pxh94_ab5b6f50-172b-4535-a0f9-5d103bcab4e7/ovnkube-controller/3.log" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.866648 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pxh94_ab5b6f50-172b-4535-a0f9-5d103bcab4e7/ovnkube-controller/2.log" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.871510 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.871640 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.871669 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.871706 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.871732 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:59Z","lastTransitionTime":"2026-01-26T16:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.873784 4856 generic.go:334] "Generic (PLEG): container finished" podID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerID="203d756903498bed1c57a3e87a95f4b24f808514567a99505ba2f3cfec468cb6" exitCode=1 Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.873862 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" event={"ID":"ab5b6f50-172b-4535-a0f9-5d103bcab4e7","Type":"ContainerDied","Data":"203d756903498bed1c57a3e87a95f4b24f808514567a99505ba2f3cfec468cb6"} Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.874231 4856 scope.go:117] "RemoveContainer" containerID="71cb66e6f52823e7aa63f88ba1d153fde73816120aa75d4a6b910937303d2b9e" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.875386 4856 scope.go:117] "RemoveContainer" containerID="203d756903498bed1c57a3e87a95f4b24f808514567a99505ba2f3cfec468cb6" Jan 26 16:59:59 crc kubenswrapper[4856]: E0126 16:59:59.875636 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pxh94_openshift-ovn-kubernetes(ab5b6f50-172b-4535-a0f9-5d103bcab4e7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" Jan 26 16:59:59 crc kubenswrapper[4856]: E0126 16:59:59.896262 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17523591-a778-4a97-aeab-8a7a93101850\\\",\\\"systemUUID\\\":\\\"ca45d056-99cb-4442-8a44-7e899628ecb2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.899825 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc864b0d-83bc-4954-9c61-ad650157caff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbde934a6c8acad10ca3ab8206d0ddbd4f7b17e9d304b898a68f4d3b0303bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:
58:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b48f763d4aff37169399be766d5ab4f7ebbf91f304d139c9022a8556946eb107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0094e662f53c4832a984e05a880021af05ffc4c27f25394c28a070d9ef5490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerI
D\\\":\\\"cri-o://cb95f623fa4f5f42217649ccce4225f9fe588bb3558da9334eef2b40ddb62486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb95f623fa4f5f42217649ccce4225f9fe588bb3558da9334eef2b40ddb62486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.906110 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.906210 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.906228 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.906280 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.906295 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:59Z","lastTransitionTime":"2026-01-26T16:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.921460 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c3b0574-b4cc-483d-ae88-6517d1f30772\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec9063a7c03990fc26fc47427f164a769fd649c2bdbd9d23ea7f646e569734be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d
06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67797295a8c3952902b2696c6fdb26b72ce1826b5ccd522a24aac90a0411b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67797295a8c3952902b2696c6fdb26b72ce1826b5ccd522a24aac90a0411b5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:59 crc 
kubenswrapper[4856]: E0126 16:59:59.928165 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17523591-a778-4a97-aeab-8a7a93101850\\\",\\\"systemUUID\\\":\\\"ca45d056-99cb-4442-8a44-7e899628ecb2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.934203 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.934275 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.934298 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.934326 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.934344 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:59Z","lastTransitionTime":"2026-01-26T16:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.939051 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63c75ede-5170-4db0-811b-5217ef8d72b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da26ddcfc6ccde3c9aabd63bdef3435f9a9eaab8c095bfeef5670c1295576cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ca3fa13d9e8d442efa93b44a870369f7df3fe7562d77b98528f5c19a751f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xm9cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.955359 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:59 crc kubenswrapper[4856]: E0126 16:59:59.956559 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:59Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17523591-a778-4a97-aeab-8a7a93101850\\\",\\\"systemUUID\\\":\\\"ca45d056-99cb-4442-8a44-7e899628ecb2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.961870 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.961909 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.961921 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.961937 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.961946 4856 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:59Z","lastTransitionTime":"2026-01-26T16:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.971278 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-v7579" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a77e85f9-b566-4807-bb92-55963c97b93c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba87c9fc35c230bbee201a5176cb467309f0b9aee82dfc81f3b677a15486d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n9h7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03dc794e9c2035f2e1983eacad3e51d76223cb1b82e2f402c73f9453e4bd2f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n9h7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-v7579\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:59 crc kubenswrapper[4856]: E0126 16:59:59.974816 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17523591-a778-4a97-aeab-8a7a93101850\\\",\\\"systemUUID\\\":\\\"ca45d056-99cb-4442-8a44-7e899628ecb2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.982095 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tp5hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f28414c-12c1-4adb-be7b-6182310828eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af4930ff9cebd62106545fb3da2dfc93f7b591426fe4c85aa2e637b60c935f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzc59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tp5hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.982976 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.983005 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.983017 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.983034 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.983044 4856 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T16:59:59Z","lastTransitionTime":"2026-01-26T16:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 16:59:59 crc kubenswrapper[4856]: I0126 16:59:59.997247 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c09748c0963bdb50c9552f249d1135ea53e68c52babd788dad050951cb849cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:59 crc kubenswrapper[4856]: E0126 16:59:59.998899 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:59Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T16:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17523591-a778-4a97-aeab-8a7a93101850\\\",\\\"systemUUID\\\":\\\"ca45d056-99cb-4442-8a44-7e899628ecb2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T16:59:59Z is after 2025-08-24T17:21:41Z" Jan 26 16:59:59 crc kubenswrapper[4856]: E0126 16:59:59.999001 4856 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.000859 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.000876 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.000884 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.000896 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.000905 4856 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:00Z","lastTransitionTime":"2026-01-26T17:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.009929 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c823748751e9938f10fab08e33b5fffff5a6d15961ce028204131b7b69a56c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\
"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:00:00Z is after 2025-08-24T17:21:41Z" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.026492 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:00:00Z is after 2025-08-24T17:21:41Z" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.038414 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t4fq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://627cd6b39fdedd967fffcfc0755439277b2d73016fba1ddc7f5b9d5deba43b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p5swq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t4fq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:00:00Z is after 2025-08-24T17:21:41Z" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.074308 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq622" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a742e7b-c420-46e3-9e96-e9c744af6124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afeb20035224feeab28a92ac77b43a24e653e49c56a25590a9861019a2b7a8ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad7222c9b91c0065a545bc1904d9864a5923cc13bfb6617daeb4a965a830f191\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:59:45Z\\\",\\\"message\\\":\\\"2026-01-26T16:59:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_41ab0694-d9c8-49a7-bf30-57e732ac7550\\\\n2026-01-26T16:59:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_41ab0694-d9c8-49a7-bf30-57e732ac7550 to /host/opt/cni/bin/\\\\n2026-01-26T16:59:00Z [verbose] multus-daemon started\\\\n2026-01-26T16:59:00Z [verbose] Readiness Indicator file check\\\\n2026-01-26T16:59:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8plh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq622\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:00:00Z is after 2025-08-24T17:21:41Z" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.100613 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59ecd87a-c5db-446d-ad3e-cfabbd648c1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89975b8f9428f81ab5d3fb48ced5dd9c837bea2feea3b89f5f7ff8d7d5d15b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40d432159a07ba20bf95f058bf8a597d67cbd8d852519bed035694f8ba3d8ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ba0f131eee14df389391e46bc772894e49e976c3663df5fb3bc98b3cb4a3d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03a01b651d9f66da4dd1f6e0d29ad97c0d6ae46b644c3d997d8dc99476706df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f07438e20bdf71c752bb661084c835341999c561bb5442d75e177223881276f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T16:58:51Z\\\"
,\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 16:58:43.431198 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 16:58:43.432227 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-99450334/tls.crt::/tmp/serving-cert-99450334/tls.key\\\\\\\"\\\\nI0126 16:58:50.828320 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 16:58:50.830449 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 16:58:50.830472 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 16:58:50.830509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 16:58:50.830515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 16:58:50.834885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 16:58:50.834958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 16:58:50.834991 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 16:58:50.834905 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 16:58:50.835015 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 16:58:50.835084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 16:58:50.835106 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 16:58:50.835116 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 16:58:50.837641 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2daa3f13d37b7ae19dfca406b6b5cfdcd4f211287f7d410193f2fee36a24553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffc4b80383322e2c628a33ac37f15fd77c7650ca108c022f97bb48aad023462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ffc4b80383322e2c628a33ac37f15fd
77c7650ca108c022f97bb48aad023462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:00:00Z is after 2025-08-24T17:21:41Z" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.103592 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.103637 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.103649 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.103669 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.103681 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:00Z","lastTransitionTime":"2026-01-26T17:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.122286 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191ad5d41024c88c1a4fbc30f307dd6340fd55b93b00d22f62209d0e82be286f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf2a4d8be409a46ffe5702797e56a80023a72a19fb7dc5c49e5f4984cedc600\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:00:00Z is after 2025-08-24T17:21:41Z" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.138237 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-295wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e50462-28e6-4531-ada4-e652310e6cce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf98h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf98h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:59:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-295wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:00:00Z is after 2025-08-24T17:21:41Z" Jan 26 17:00:00 crc 
kubenswrapper[4856]: I0126 17:00:00.159734 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:00:00Z is after 2025-08-24T17:21:41Z" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.177362 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad7b59f9-beb7-49d6-a2d1-e29133e46854\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc53a6a6d4222ee5e1ac29fd5957d8cdf3fd42de72c68c85329374ce7afc4004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:59Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0c0b
092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0c0b092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:06Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://249b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://249b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v2l7v\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:00:00Z is after 2025-08-24T17:21:41Z" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.198750 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203d756903498bed1c57a3e87a95f4b24f808514567a99505ba2f3cfec468cb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71cb66e6f52823e7aa63f88ba1d153fde73816120aa75d4a6b910937303d2b9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:59:29Z\\\",\\\"message\\\":\\\"n-kubernetes/ovnkube-node-pxh94\\\\nI0126 16:59:29.834990 6550 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-t4fq2\\\\nI0126 16:59:29.834996 6550 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-t4fq2\\\\nI0126 16:59:29.835005 6550 ovn.go:134] Ensuring zone local for Pod 
openshift-dns/node-resolver-t4fq2 in node crc\\\\nI0126 16:59:29.835026 6550 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 16:59:29.835056 6550 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0126 16:59:29.835079 6550 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0126 16:59:29.835090 6550 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0126 16:59:29.835097 6550 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203d756903498bed1c57a3e87a95f4b24f808514567a99505ba2f3cfec468cb6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:59:58Z\\\",\\\"message\\\":\\\" server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0126 16:59:57.729092 6886 port_cache.go:96] port-cache(openshift-network-diagnostics_network-check-target-xd92c): added port \\\\u0026{name:openshift-network-diagnostics_network-check-target-xd92c uuid:61897e97-c771-4738-8709-09636387cb00 logicalSwitch:crc ips:[0xc008f54210] mac:[10 88 10 217 0 4] expires:{wall:0 ext:0 loc:\\\\u003cnil\\\\u003e}} with IP: [10.217.0.4/23] and MAC: 0a:58:0a:d9:00:04\\\\nI0126 
16:59:57.729124 6886 pods.go:252] [openshift-network-diagnostics/network-check-target-xd92c] addLogicalPort took 1.706822ms, libovsdb time 885.597µs\\\\nI0126 16:59:57.729132 6886 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-diagnostics/network-check-target-xd92c after 0 failed attempt(s)\\\\nF0126 16:59:57.729134 6886 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"
host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\
\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:00:00Z is after 2025-08-24T17:21:41Z" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.206429 
4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.206494 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.206512 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.206582 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.206600 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:00Z","lastTransitionTime":"2026-01-26T17:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.308956 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.308985 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.308995 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.309010 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.309020 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:00Z","lastTransitionTime":"2026-01-26T17:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.395005 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.395066 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.395105 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.395080 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:00:00 crc kubenswrapper[4856]: E0126 17:00:00.395150 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:00:00 crc kubenswrapper[4856]: E0126 17:00:00.395222 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-295wr" podUID="12e50462-28e6-4531-ada4-e652310e6cce" Jan 26 17:00:00 crc kubenswrapper[4856]: E0126 17:00:00.395273 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:00:00 crc kubenswrapper[4856]: E0126 17:00:00.395325 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.411122 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.411167 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.411180 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.411196 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.411209 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:00Z","lastTransitionTime":"2026-01-26T17:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.514740 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.514804 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.514828 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.514881 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.514906 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:00Z","lastTransitionTime":"2026-01-26T17:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.546266 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 09:12:06.336930725 +0000 UTC Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.621669 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.621792 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.621812 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.622437 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.622482 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:00Z","lastTransitionTime":"2026-01-26T17:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.725215 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.725255 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.725265 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.725278 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.725288 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:00Z","lastTransitionTime":"2026-01-26T17:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.828247 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.828297 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.828310 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.828328 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.828339 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:00Z","lastTransitionTime":"2026-01-26T17:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.880783 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pxh94_ab5b6f50-172b-4535-a0f9-5d103bcab4e7/ovnkube-controller/3.log" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.930928 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.931000 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.931020 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.931047 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:00 crc kubenswrapper[4856]: I0126 17:00:00.931065 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:00Z","lastTransitionTime":"2026-01-26T17:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:01 crc kubenswrapper[4856]: I0126 17:00:01.033924 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:01 crc kubenswrapper[4856]: I0126 17:00:01.033965 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:01 crc kubenswrapper[4856]: I0126 17:00:01.033975 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:01 crc kubenswrapper[4856]: I0126 17:00:01.034017 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:01 crc kubenswrapper[4856]: I0126 17:00:01.034030 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:01Z","lastTransitionTime":"2026-01-26T17:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:01 crc kubenswrapper[4856]: I0126 17:00:01.136194 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:01 crc kubenswrapper[4856]: I0126 17:00:01.136235 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:01 crc kubenswrapper[4856]: I0126 17:00:01.136246 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:01 crc kubenswrapper[4856]: I0126 17:00:01.136263 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:01 crc kubenswrapper[4856]: I0126 17:00:01.136280 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:01Z","lastTransitionTime":"2026-01-26T17:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:01 crc kubenswrapper[4856]: I0126 17:00:01.238849 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:01 crc kubenswrapper[4856]: I0126 17:00:01.238884 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:01 crc kubenswrapper[4856]: I0126 17:00:01.238893 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:01 crc kubenswrapper[4856]: I0126 17:00:01.238906 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:01 crc kubenswrapper[4856]: I0126 17:00:01.238915 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:01Z","lastTransitionTime":"2026-01-26T17:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:01 crc kubenswrapper[4856]: I0126 17:00:01.341090 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:01 crc kubenswrapper[4856]: I0126 17:00:01.341476 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:01 crc kubenswrapper[4856]: I0126 17:00:01.341492 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:01 crc kubenswrapper[4856]: I0126 17:00:01.341510 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:01 crc kubenswrapper[4856]: I0126 17:00:01.341520 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:01Z","lastTransitionTime":"2026-01-26T17:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:01 crc kubenswrapper[4856]: I0126 17:00:01.444217 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:01 crc kubenswrapper[4856]: I0126 17:00:01.444259 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:01 crc kubenswrapper[4856]: I0126 17:00:01.444272 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:01 crc kubenswrapper[4856]: I0126 17:00:01.444289 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:01 crc kubenswrapper[4856]: I0126 17:00:01.444300 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:01Z","lastTransitionTime":"2026-01-26T17:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:01 crc kubenswrapper[4856]: I0126 17:00:01.546061 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:01 crc kubenswrapper[4856]: I0126 17:00:01.546098 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:01 crc kubenswrapper[4856]: I0126 17:00:01.546109 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:01 crc kubenswrapper[4856]: I0126 17:00:01.546124 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:01 crc kubenswrapper[4856]: I0126 17:00:01.546135 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:01Z","lastTransitionTime":"2026-01-26T17:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:01 crc kubenswrapper[4856]: I0126 17:00:01.546453 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 08:14:26.676118664 +0000 UTC Jan 26 17:00:01 crc kubenswrapper[4856]: I0126 17:00:01.648585 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:01 crc kubenswrapper[4856]: I0126 17:00:01.648626 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:01 crc kubenswrapper[4856]: I0126 17:00:01.648635 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:01 crc kubenswrapper[4856]: I0126 17:00:01.648650 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:01 crc kubenswrapper[4856]: I0126 17:00:01.648660 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:01Z","lastTransitionTime":"2026-01-26T17:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:01 crc kubenswrapper[4856]: I0126 17:00:01.752058 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:01 crc kubenswrapper[4856]: I0126 17:00:01.752108 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:01 crc kubenswrapper[4856]: I0126 17:00:01.752122 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:01 crc kubenswrapper[4856]: I0126 17:00:01.752144 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:01 crc kubenswrapper[4856]: I0126 17:00:01.752159 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:01Z","lastTransitionTime":"2026-01-26T17:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:01 crc kubenswrapper[4856]: I0126 17:00:01.854222 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:01 crc kubenswrapper[4856]: I0126 17:00:01.854263 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:01 crc kubenswrapper[4856]: I0126 17:00:01.854272 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:01 crc kubenswrapper[4856]: I0126 17:00:01.854286 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:01 crc kubenswrapper[4856]: I0126 17:00:01.854295 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:01Z","lastTransitionTime":"2026-01-26T17:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:01 crc kubenswrapper[4856]: I0126 17:00:01.958120 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:01 crc kubenswrapper[4856]: I0126 17:00:01.958166 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:01 crc kubenswrapper[4856]: I0126 17:00:01.958181 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:01 crc kubenswrapper[4856]: I0126 17:00:01.958202 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:01 crc kubenswrapper[4856]: I0126 17:00:01.958217 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:01Z","lastTransitionTime":"2026-01-26T17:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.060832 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.060874 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.060883 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.060898 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.060908 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:02Z","lastTransitionTime":"2026-01-26T17:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.163403 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.163468 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.163483 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.163504 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.163551 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:02Z","lastTransitionTime":"2026-01-26T17:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.266205 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.266261 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.266272 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.266292 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.266309 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:02Z","lastTransitionTime":"2026-01-26T17:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.369071 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.369134 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.369152 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.369175 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.369192 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:02Z","lastTransitionTime":"2026-01-26T17:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.394351 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.394444 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.394477 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.394514 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:00:02 crc kubenswrapper[4856]: E0126 17:00:02.394709 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:00:02 crc kubenswrapper[4856]: E0126 17:00:02.394868 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-295wr" podUID="12e50462-28e6-4531-ada4-e652310e6cce" Jan 26 17:00:02 crc kubenswrapper[4856]: E0126 17:00:02.394977 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:00:02 crc kubenswrapper[4856]: E0126 17:00:02.395130 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.471883 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.471932 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.471949 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.471968 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.471983 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:02Z","lastTransitionTime":"2026-01-26T17:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.547602 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 05:07:47.139019889 +0000 UTC Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.574797 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.574839 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.574849 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.574867 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.574879 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:02Z","lastTransitionTime":"2026-01-26T17:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.678332 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.678395 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.678416 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.678442 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.678458 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:02Z","lastTransitionTime":"2026-01-26T17:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.781743 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.781807 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.781825 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.781851 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.781869 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:02Z","lastTransitionTime":"2026-01-26T17:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.885206 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.885278 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.885312 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.885341 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.885362 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:02Z","lastTransitionTime":"2026-01-26T17:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.988256 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.988312 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.988330 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.988356 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:02 crc kubenswrapper[4856]: I0126 17:00:02.988372 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:02Z","lastTransitionTime":"2026-01-26T17:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:03 crc kubenswrapper[4856]: I0126 17:00:03.090379 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:03 crc kubenswrapper[4856]: I0126 17:00:03.090420 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:03 crc kubenswrapper[4856]: I0126 17:00:03.090432 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:03 crc kubenswrapper[4856]: I0126 17:00:03.090448 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:03 crc kubenswrapper[4856]: I0126 17:00:03.090459 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:03Z","lastTransitionTime":"2026-01-26T17:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:03 crc kubenswrapper[4856]: I0126 17:00:03.192964 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:03 crc kubenswrapper[4856]: I0126 17:00:03.193012 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:03 crc kubenswrapper[4856]: I0126 17:00:03.193023 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:03 crc kubenswrapper[4856]: I0126 17:00:03.193041 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:03 crc kubenswrapper[4856]: I0126 17:00:03.193053 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:03Z","lastTransitionTime":"2026-01-26T17:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:03 crc kubenswrapper[4856]: I0126 17:00:03.296316 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:03 crc kubenswrapper[4856]: I0126 17:00:03.296366 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:03 crc kubenswrapper[4856]: I0126 17:00:03.296377 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:03 crc kubenswrapper[4856]: I0126 17:00:03.296393 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:03 crc kubenswrapper[4856]: I0126 17:00:03.296403 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:03Z","lastTransitionTime":"2026-01-26T17:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:03 crc kubenswrapper[4856]: I0126 17:00:03.399778 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:03 crc kubenswrapper[4856]: I0126 17:00:03.399815 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:03 crc kubenswrapper[4856]: I0126 17:00:03.399827 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:03 crc kubenswrapper[4856]: I0126 17:00:03.399855 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:03 crc kubenswrapper[4856]: I0126 17:00:03.399868 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:03Z","lastTransitionTime":"2026-01-26T17:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:03 crc kubenswrapper[4856]: I0126 17:00:03.502831 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:03 crc kubenswrapper[4856]: I0126 17:00:03.502868 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:03 crc kubenswrapper[4856]: I0126 17:00:03.502886 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:03 crc kubenswrapper[4856]: I0126 17:00:03.502905 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:03 crc kubenswrapper[4856]: I0126 17:00:03.502916 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:03Z","lastTransitionTime":"2026-01-26T17:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:03 crc kubenswrapper[4856]: I0126 17:00:03.548111 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 17:49:15.869698335 +0000 UTC Jan 26 17:00:03 crc kubenswrapper[4856]: I0126 17:00:03.605949 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:03 crc kubenswrapper[4856]: I0126 17:00:03.606005 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:03 crc kubenswrapper[4856]: I0126 17:00:03.606017 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:03 crc kubenswrapper[4856]: I0126 17:00:03.606034 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:03 crc kubenswrapper[4856]: I0126 17:00:03.606046 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:03Z","lastTransitionTime":"2026-01-26T17:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:03 crc kubenswrapper[4856]: I0126 17:00:03.709396 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:03 crc kubenswrapper[4856]: I0126 17:00:03.709480 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:03 crc kubenswrapper[4856]: I0126 17:00:03.709506 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:03 crc kubenswrapper[4856]: I0126 17:00:03.709580 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:03 crc kubenswrapper[4856]: I0126 17:00:03.709620 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:03Z","lastTransitionTime":"2026-01-26T17:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:03 crc kubenswrapper[4856]: I0126 17:00:03.813184 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:03 crc kubenswrapper[4856]: I0126 17:00:03.813222 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:03 crc kubenswrapper[4856]: I0126 17:00:03.813234 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:03 crc kubenswrapper[4856]: I0126 17:00:03.813250 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:03 crc kubenswrapper[4856]: I0126 17:00:03.813262 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:03Z","lastTransitionTime":"2026-01-26T17:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:03 crc kubenswrapper[4856]: I0126 17:00:03.915171 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:03 crc kubenswrapper[4856]: I0126 17:00:03.915272 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:03 crc kubenswrapper[4856]: I0126 17:00:03.915297 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:03 crc kubenswrapper[4856]: I0126 17:00:03.915327 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:03 crc kubenswrapper[4856]: I0126 17:00:03.915352 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:03Z","lastTransitionTime":"2026-01-26T17:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.018408 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.018462 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.018479 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.018500 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.018513 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:04Z","lastTransitionTime":"2026-01-26T17:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.121640 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.121713 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.121735 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.121763 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.121783 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:04Z","lastTransitionTime":"2026-01-26T17:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.224777 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.224828 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.224846 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.224868 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.224883 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:04Z","lastTransitionTime":"2026-01-26T17:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.328897 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.328971 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.328995 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.329025 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.329051 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:04Z","lastTransitionTime":"2026-01-26T17:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.394861 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.394898 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.394988 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:00:04 crc kubenswrapper[4856]: E0126 17:00:04.395076 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-295wr" podUID="12e50462-28e6-4531-ada4-e652310e6cce" Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.395146 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:00:04 crc kubenswrapper[4856]: E0126 17:00:04.395247 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:00:04 crc kubenswrapper[4856]: E0126 17:00:04.395356 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:00:04 crc kubenswrapper[4856]: E0126 17:00:04.395516 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.431839 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.431896 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.431914 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.431936 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.431954 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:04Z","lastTransitionTime":"2026-01-26T17:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.534596 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.534655 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.534672 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.534693 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.534711 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:04Z","lastTransitionTime":"2026-01-26T17:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.549178 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 13:13:36.824011002 +0000 UTC Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.637481 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.637564 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.637581 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.637604 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.637620 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:04Z","lastTransitionTime":"2026-01-26T17:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.739749 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.739814 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.739831 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.739859 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.739877 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:04Z","lastTransitionTime":"2026-01-26T17:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.843139 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.843199 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.843238 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.843273 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.843291 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:04Z","lastTransitionTime":"2026-01-26T17:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.945776 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.945821 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.945831 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.945845 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:04 crc kubenswrapper[4856]: I0126 17:00:04.945853 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:04Z","lastTransitionTime":"2026-01-26T17:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.048555 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.048608 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.048624 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.048644 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.048658 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:05Z","lastTransitionTime":"2026-01-26T17:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.151305 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.151357 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.151369 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.151387 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.151400 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:05Z","lastTransitionTime":"2026-01-26T17:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.253505 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.253582 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.253599 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.253621 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.253638 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:05Z","lastTransitionTime":"2026-01-26T17:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.357125 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.357186 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.357206 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.357229 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.357245 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:05Z","lastTransitionTime":"2026-01-26T17:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.425619 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203d756903498bed1c57a3e87a95f4b24f808514567a99505ba2f3cfec468cb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71cb66e6f52823e7aa63f88ba1d153fde73816120aa75d4a6b910937303d2b9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:59:29Z\\\",\\\"message\\\":\\\"n-kubernetes/ovnkube-node-pxh94\\\\nI0126 16:59:29.834990 6550 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-t4fq2\\\\nI0126 16:59:29.834996 6550 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-t4fq2\\\\nI0126 16:59:29.835005 6550 ovn.go:134] Ensuring zone local for Pod 
openshift-dns/node-resolver-t4fq2 in node crc\\\\nI0126 16:59:29.835026 6550 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 16:59:29.835056 6550 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0126 16:59:29.835079 6550 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0126 16:59:29.835090 6550 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0126 16:59:29.835097 6550 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203d756903498bed1c57a3e87a95f4b24f808514567a99505ba2f3cfec468cb6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:59:58Z\\\",\\\"message\\\":\\\" server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0126 16:59:57.729092 6886 port_cache.go:96] port-cache(openshift-network-diagnostics_network-check-target-xd92c): added port \\\\u0026{name:openshift-network-diagnostics_network-check-target-xd92c uuid:61897e97-c771-4738-8709-09636387cb00 logicalSwitch:crc ips:[0xc008f54210] mac:[10 88 10 217 0 4] expires:{wall:0 ext:0 loc:\\\\u003cnil\\\\u003e}} with IP: [10.217.0.4/23] and MAC: 0a:58:0a:d9:00:04\\\\nI0126 
16:59:57.729124 6886 pods.go:252] [openshift-network-diagnostics/network-check-target-xd92c] addLogicalPort took 1.706822ms, libovsdb time 885.597µs\\\\nI0126 16:59:57.729132 6886 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-diagnostics/network-check-target-xd92c after 0 failed attempt(s)\\\\nF0126 16:59:57.729134 6886 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"
host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\
\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:00:05Z is after 2025-08-24T17:21:41Z" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.440410 
4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-295wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e50462-28e6-4531-ada4-e652310e6cce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf98h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf98h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:59:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-295wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:00:05Z is after 2025-08-24T17:21:41Z" Jan 26 17:00:05 crc 
kubenswrapper[4856]: I0126 17:00:05.455002 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:00:05Z is after 2025-08-24T17:21:41Z" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.459580 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.459627 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.459641 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.459658 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.459669 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:05Z","lastTransitionTime":"2026-01-26T17:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.470359 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad7b59f9-beb7-49d6-a2d1-e29133e46854\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc53a6a6d4222ee5e1ac29fd5957d8cdf3fd42de72c68c85329374ce7afc4004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0c0b092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0c0b092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://249b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://249b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v2l7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:00:05Z is after 2025-08-24T17:21:41Z" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.482355 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63c75ede-5170-4db0-811b-5217ef8d72b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da26ddcfc6ccde3c9aabd63bdef3435f9a9eaab8c095bfeef5670c1295576cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745
f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ca3fa13d9e8d442efa93b44a870369f7df3fe7562d77b98528f5c19a751f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xm9cq\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:00:05Z is after 2025-08-24T17:21:41Z" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.494738 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc864b0d-83bc-4954-9c61-ad650157caff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbde934a6c8acad10ca3ab8206d0ddbd4f7b17e9d304b898a68f4d3b0303bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:28Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b48f763d4aff37169399be766d5ab4f7ebbf91f304d139c9022a8556946eb107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0094e662f53c4832a984e05a880021af05ffc4c27f25394c28a070d9ef5490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb95f623
fa4f5f42217649ccce4225f9fe588bb3558da9334eef2b40ddb62486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb95f623fa4f5f42217649ccce4225f9fe588bb3558da9334eef2b40ddb62486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:00:05Z is after 2025-08-24T17:21:41Z" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.505308 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c3b0574-b4cc-483d-ae88-6517d1f30772\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec9063a7c03990fc26fc47427f164a769fd649c2bdbd9d23ea7f646e569734be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67797295a8c3952902b2696c6fdb26b72ce1826b5ccd522a24aac90a0411b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67797295a8c3952902b2696c6fdb26b72ce1826b5ccd522a24aac90a0411b5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:00:05Z is after 2025-08-24T17:21:41Z" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.523200 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c09748c0963bdb50c9552f249d1135ea53e68c52babd788dad050951cb849cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T17:00:05Z is after 2025-08-24T17:21:41Z" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.536482 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:00:05Z is after 2025-08-24T17:21:41Z" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.548266 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-v7579" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a77e85f9-b566-4807-bb92-55963c97b93c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba87c9fc35c230bbee201a5176cb467309f0b9aee82dfc81f3b677a15486d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n9h7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03dc794e9c2035f2e1983eacad3e51d76223
cb1b82e2f402c73f9453e4bd2f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n9h7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-v7579\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:00:05Z is after 2025-08-24T17:21:41Z" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.550359 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 01:54:03.416173542 +0000 UTC Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.559599 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tp5hk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f28414c-12c1-4adb-be7b-6182310828eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af4930ff9cebd62106545fb3da2dfc93f7b591426fe4c85aa2e637b60c935f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzc59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tp5hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:00:05Z is after 2025-08-24T17:21:41Z" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.562228 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.562254 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.562262 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.562275 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.562284 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:05Z","lastTransitionTime":"2026-01-26T17:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.577692 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191ad5d41024c88c1a4fbc30f307dd6340fd55b93b00d22f62209d0e82be286f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf2a4d8be409a46ffe5702797e56a80023a72a19fb7dc5c49e5f4984cedc600\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:00:05Z is after 2025-08-24T17:21:41Z" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.590702 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c823748751e9938f10fab08e33b5fffff5a6d15961ce028204131b7b69a56c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T17:00:05Z is after 2025-08-24T17:21:41Z" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.606974 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:00:05Z is after 2025-08-24T17:21:41Z" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.621714 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t4fq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://627cd6b39fdedd967fffcfc0755439277b2d73016fba1ddc7f5b9d5deba43b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p5swq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t4fq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:00:05Z is after 2025-08-24T17:21:41Z" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.638831 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq622" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a742e7b-c420-46e3-9e96-e9c744af6124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afeb20035224feeab28a92ac77b43a24e653e49c56a25590a9861019a2b7a8ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad7222c9b91c0065a545bc1904d9864a5923cc13bfb6617daeb4a965a830f191\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:59:45Z\\\",\\\"message\\\":\\\"2026-01-26T16:59:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_41ab0694-d9c8-49a7-bf30-57e732ac7550\\\\n2026-01-26T16:59:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_41ab0694-d9c8-49a7-bf30-57e732ac7550 to /host/opt/cni/bin/\\\\n2026-01-26T16:59:00Z [verbose] multus-daemon started\\\\n2026-01-26T16:59:00Z [verbose] Readiness Indicator file check\\\\n2026-01-26T16:59:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8plh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq622\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:00:05Z is after 2025-08-24T17:21:41Z" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.656304 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59ecd87a-c5db-446d-ad3e-cfabbd648c1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89975b8f9428f81ab5d3fb48ced5dd9c837bea2feea3b89f5f7ff8d7d5d15b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40d432159a07ba20bf95f058bf8a597d67cbd8d852519bed035694f8ba3d8ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ba0f131eee14df389391e46bc772894e49e976c3663df5fb3bc98b3cb4a3d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03a01b651d9f66da4dd1f6e0d29ad97c0d6ae46b644c3d997d8dc99476706df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f07438e20bdf71c752bb661084c835341999c561bb5442d75e177223881276f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T16:58:51Z\\\"
,\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 16:58:43.431198 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 16:58:43.432227 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-99450334/tls.crt::/tmp/serving-cert-99450334/tls.key\\\\\\\"\\\\nI0126 16:58:50.828320 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 16:58:50.830449 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 16:58:50.830472 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 16:58:50.830509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 16:58:50.830515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 16:58:50.834885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 16:58:50.834958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 16:58:50.834991 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 16:58:50.834905 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 16:58:50.835015 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 16:58:50.835084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 16:58:50.835106 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 16:58:50.835116 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 16:58:50.837641 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2daa3f13d37b7ae19dfca406b6b5cfdcd4f211287f7d410193f2fee36a24553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffc4b80383322e2c628a33ac37f15fd77c7650ca108c022f97bb48aad023462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ffc4b80383322e2c628a33ac37f15fd
77c7650ca108c022f97bb48aad023462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:00:05Z is after 2025-08-24T17:21:41Z" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.664885 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.664937 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.664955 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.664978 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.664996 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:05Z","lastTransitionTime":"2026-01-26T17:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.767397 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.767466 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.767502 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.767519 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.767556 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:05Z","lastTransitionTime":"2026-01-26T17:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.871899 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.871966 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.871984 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.872013 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.872031 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:05Z","lastTransitionTime":"2026-01-26T17:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.976285 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.976322 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.976333 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.976348 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:05 crc kubenswrapper[4856]: I0126 17:00:05.976360 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:05Z","lastTransitionTime":"2026-01-26T17:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:06 crc kubenswrapper[4856]: I0126 17:00:06.079353 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:06 crc kubenswrapper[4856]: I0126 17:00:06.079415 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:06 crc kubenswrapper[4856]: I0126 17:00:06.079431 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:06 crc kubenswrapper[4856]: I0126 17:00:06.079451 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:06 crc kubenswrapper[4856]: I0126 17:00:06.079466 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:06Z","lastTransitionTime":"2026-01-26T17:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:06 crc kubenswrapper[4856]: I0126 17:00:06.182128 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:06 crc kubenswrapper[4856]: I0126 17:00:06.182204 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:06 crc kubenswrapper[4856]: I0126 17:00:06.182228 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:06 crc kubenswrapper[4856]: I0126 17:00:06.182260 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:06 crc kubenswrapper[4856]: I0126 17:00:06.182287 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:06Z","lastTransitionTime":"2026-01-26T17:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:06 crc kubenswrapper[4856]: I0126 17:00:06.285803 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:06 crc kubenswrapper[4856]: I0126 17:00:06.285867 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:06 crc kubenswrapper[4856]: I0126 17:00:06.285906 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:06 crc kubenswrapper[4856]: I0126 17:00:06.285940 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:06 crc kubenswrapper[4856]: I0126 17:00:06.285963 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:06Z","lastTransitionTime":"2026-01-26T17:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:06 crc kubenswrapper[4856]: I0126 17:00:06.388814 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:06 crc kubenswrapper[4856]: I0126 17:00:06.388887 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:06 crc kubenswrapper[4856]: I0126 17:00:06.388907 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:06 crc kubenswrapper[4856]: I0126 17:00:06.388931 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:06 crc kubenswrapper[4856]: I0126 17:00:06.388950 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:06Z","lastTransitionTime":"2026-01-26T17:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:00:06 crc kubenswrapper[4856]: I0126 17:00:06.395151 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:00:06 crc kubenswrapper[4856]: I0126 17:00:06.395185 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:00:06 crc kubenswrapper[4856]: I0126 17:00:06.395215 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 17:00:06 crc kubenswrapper[4856]: E0126 17:00:06.395283 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:00:06 crc kubenswrapper[4856]: I0126 17:00:06.395298 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:00:06 crc kubenswrapper[4856]: E0126 17:00:06.395464 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:00:06 crc kubenswrapper[4856]: E0126 17:00:06.395657 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:00:06 crc kubenswrapper[4856]: E0126 17:00:06.395796 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-295wr" podUID="12e50462-28e6-4531-ada4-e652310e6cce" Jan 26 17:00:06 crc kubenswrapper[4856]: I0126 17:00:06.491658 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:06 crc kubenswrapper[4856]: I0126 17:00:06.491858 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:06 crc kubenswrapper[4856]: I0126 17:00:06.491890 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:06 crc kubenswrapper[4856]: I0126 17:00:06.491914 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:06 crc kubenswrapper[4856]: I0126 17:00:06.491931 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:06Z","lastTransitionTime":"2026-01-26T17:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:06 crc kubenswrapper[4856]: I0126 17:00:06.551001 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 16:40:08.933451497 +0000 UTC Jan 26 17:00:06 crc kubenswrapper[4856]: I0126 17:00:06.595163 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:06 crc kubenswrapper[4856]: I0126 17:00:06.595237 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:06 crc kubenswrapper[4856]: I0126 17:00:06.595256 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:06 crc kubenswrapper[4856]: I0126 17:00:06.595280 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:06 crc kubenswrapper[4856]: I0126 17:00:06.595300 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:06Z","lastTransitionTime":"2026-01-26T17:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:06 crc kubenswrapper[4856]: I0126 17:00:06.698025 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:06 crc kubenswrapper[4856]: I0126 17:00:06.698086 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:06 crc kubenswrapper[4856]: I0126 17:00:06.698097 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:06 crc kubenswrapper[4856]: I0126 17:00:06.698118 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:06 crc kubenswrapper[4856]: I0126 17:00:06.698129 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:06Z","lastTransitionTime":"2026-01-26T17:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:06 crc kubenswrapper[4856]: I0126 17:00:06.801059 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:06 crc kubenswrapper[4856]: I0126 17:00:06.801092 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:06 crc kubenswrapper[4856]: I0126 17:00:06.801100 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:06 crc kubenswrapper[4856]: I0126 17:00:06.801112 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:06 crc kubenswrapper[4856]: I0126 17:00:06.801121 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:06Z","lastTransitionTime":"2026-01-26T17:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:06 crc kubenswrapper[4856]: I0126 17:00:06.904168 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:06 crc kubenswrapper[4856]: I0126 17:00:06.904222 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:06 crc kubenswrapper[4856]: I0126 17:00:06.904239 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:06 crc kubenswrapper[4856]: I0126 17:00:06.904265 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:06 crc kubenswrapper[4856]: I0126 17:00:06.904284 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:06Z","lastTransitionTime":"2026-01-26T17:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.007476 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.007580 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.007608 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.007642 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.007666 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:07Z","lastTransitionTime":"2026-01-26T17:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.110492 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.110579 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.110615 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.110652 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.110673 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:07Z","lastTransitionTime":"2026-01-26T17:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.214125 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.214196 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.214221 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.214250 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.214272 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:07Z","lastTransitionTime":"2026-01-26T17:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.317334 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.317509 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.317571 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.317640 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.317666 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:07Z","lastTransitionTime":"2026-01-26T17:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.421797 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.422898 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.422958 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.422975 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.422999 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.423019 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:07Z","lastTransitionTime":"2026-01-26T17:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.526317 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.526367 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.526406 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.526438 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.526463 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:07Z","lastTransitionTime":"2026-01-26T17:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.551854 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 05:32:43.980500834 +0000 UTC Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.629297 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.629329 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.629337 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.629350 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.629358 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:07Z","lastTransitionTime":"2026-01-26T17:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.732011 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.732059 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.732069 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.732084 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.732095 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:07Z","lastTransitionTime":"2026-01-26T17:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.835096 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.835203 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.835238 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.835279 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.835303 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:07Z","lastTransitionTime":"2026-01-26T17:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.937599 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.937721 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.937737 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.937760 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:07 crc kubenswrapper[4856]: I0126 17:00:07.937776 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:07Z","lastTransitionTime":"2026-01-26T17:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.040092 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.040146 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.040164 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.040186 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.040202 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:08Z","lastTransitionTime":"2026-01-26T17:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.143513 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.143588 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.143600 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.143617 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.143630 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:08Z","lastTransitionTime":"2026-01-26T17:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.245221 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.245254 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.245264 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.245277 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.245286 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:08Z","lastTransitionTime":"2026-01-26T17:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.348825 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.348885 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.348904 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.348947 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.348973 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:08Z","lastTransitionTime":"2026-01-26T17:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.395465 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:00:08 crc kubenswrapper[4856]: E0126 17:00:08.406341 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.407129 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:00:08 crc kubenswrapper[4856]: E0126 17:00:08.407231 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.407275 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 17:00:08 crc kubenswrapper[4856]: E0126 17:00:08.407324 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-295wr" podUID="12e50462-28e6-4531-ada4-e652310e6cce" Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.407359 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:00:08 crc kubenswrapper[4856]: E0126 17:00:08.407399 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.423717 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.451775 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.451814 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.451823 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.451843 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.451854 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:08Z","lastTransitionTime":"2026-01-26T17:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.552334 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 05:09:34.646750944 +0000 UTC Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.554351 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.554406 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.554422 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.554444 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.554459 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:08Z","lastTransitionTime":"2026-01-26T17:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.657566 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.657625 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.657640 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.657664 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.657678 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:08Z","lastTransitionTime":"2026-01-26T17:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.760865 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.760909 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.760919 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.760934 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.760944 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:08Z","lastTransitionTime":"2026-01-26T17:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.863951 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.864006 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.864024 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.864048 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.864060 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:08Z","lastTransitionTime":"2026-01-26T17:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.967177 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.967227 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.967237 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.967256 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:08 crc kubenswrapper[4856]: I0126 17:00:08.967267 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:08Z","lastTransitionTime":"2026-01-26T17:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:09 crc kubenswrapper[4856]: I0126 17:00:09.069346 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:09 crc kubenswrapper[4856]: I0126 17:00:09.069412 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:09 crc kubenswrapper[4856]: I0126 17:00:09.069424 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:09 crc kubenswrapper[4856]: I0126 17:00:09.069438 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:09 crc kubenswrapper[4856]: I0126 17:00:09.069447 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:09Z","lastTransitionTime":"2026-01-26T17:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:09 crc kubenswrapper[4856]: I0126 17:00:09.173030 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:09 crc kubenswrapper[4856]: I0126 17:00:09.173076 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:09 crc kubenswrapper[4856]: I0126 17:00:09.173088 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:09 crc kubenswrapper[4856]: I0126 17:00:09.173112 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:09 crc kubenswrapper[4856]: I0126 17:00:09.173126 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:09Z","lastTransitionTime":"2026-01-26T17:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:09 crc kubenswrapper[4856]: I0126 17:00:09.276496 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:09 crc kubenswrapper[4856]: I0126 17:00:09.276586 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:09 crc kubenswrapper[4856]: I0126 17:00:09.276602 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:09 crc kubenswrapper[4856]: I0126 17:00:09.276628 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:09 crc kubenswrapper[4856]: I0126 17:00:09.276649 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:09Z","lastTransitionTime":"2026-01-26T17:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:09 crc kubenswrapper[4856]: I0126 17:00:09.379572 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:09 crc kubenswrapper[4856]: I0126 17:00:09.379617 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:09 crc kubenswrapper[4856]: I0126 17:00:09.379632 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:09 crc kubenswrapper[4856]: I0126 17:00:09.379654 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:09 crc kubenswrapper[4856]: I0126 17:00:09.379672 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:09Z","lastTransitionTime":"2026-01-26T17:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:09 crc kubenswrapper[4856]: I0126 17:00:09.482739 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:09 crc kubenswrapper[4856]: I0126 17:00:09.482809 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:09 crc kubenswrapper[4856]: I0126 17:00:09.482818 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:09 crc kubenswrapper[4856]: I0126 17:00:09.482864 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:09 crc kubenswrapper[4856]: I0126 17:00:09.482877 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:09Z","lastTransitionTime":"2026-01-26T17:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:09 crc kubenswrapper[4856]: I0126 17:00:09.553034 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 13:46:06.645148637 +0000 UTC Jan 26 17:00:09 crc kubenswrapper[4856]: I0126 17:00:09.586175 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:09 crc kubenswrapper[4856]: I0126 17:00:09.586248 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:09 crc kubenswrapper[4856]: I0126 17:00:09.586274 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:09 crc kubenswrapper[4856]: I0126 17:00:09.586304 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:09 crc kubenswrapper[4856]: I0126 17:00:09.586327 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:09Z","lastTransitionTime":"2026-01-26T17:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:09 crc kubenswrapper[4856]: I0126 17:00:09.691081 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:09 crc kubenswrapper[4856]: I0126 17:00:09.691183 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:09 crc kubenswrapper[4856]: I0126 17:00:09.691210 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:09 crc kubenswrapper[4856]: I0126 17:00:09.691260 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:09 crc kubenswrapper[4856]: I0126 17:00:09.691287 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:09Z","lastTransitionTime":"2026-01-26T17:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:09 crc kubenswrapper[4856]: I0126 17:00:09.794307 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:09 crc kubenswrapper[4856]: I0126 17:00:09.794351 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:09 crc kubenswrapper[4856]: I0126 17:00:09.794362 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:09 crc kubenswrapper[4856]: I0126 17:00:09.794375 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:09 crc kubenswrapper[4856]: I0126 17:00:09.794384 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:09Z","lastTransitionTime":"2026-01-26T17:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:09 crc kubenswrapper[4856]: I0126 17:00:09.897196 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:09 crc kubenswrapper[4856]: I0126 17:00:09.897307 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:09 crc kubenswrapper[4856]: I0126 17:00:09.897328 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:09 crc kubenswrapper[4856]: I0126 17:00:09.897351 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:09 crc kubenswrapper[4856]: I0126 17:00:09.897365 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:09Z","lastTransitionTime":"2026-01-26T17:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.000769 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.000820 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.000832 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.000855 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.000868 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:10Z","lastTransitionTime":"2026-01-26T17:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.104083 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.104150 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.104160 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.104182 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.104194 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:10Z","lastTransitionTime":"2026-01-26T17:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.115789 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.115860 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.115872 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.115894 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.115909 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:10Z","lastTransitionTime":"2026-01-26T17:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:10 crc kubenswrapper[4856]: E0126 17:00:10.131350 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:00:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:00:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:00:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:00:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:00:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:00:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:00:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:00:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17523591-a778-4a97-aeab-8a7a93101850\\\",\\\"systemUUID\\\":\\\"ca45d056-99cb-4442-8a44-7e899628ecb2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:00:10Z is after 2025-08-24T17:21:41Z" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.136316 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.136384 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.136411 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.136444 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.136473 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:10Z","lastTransitionTime":"2026-01-26T17:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:10 crc kubenswrapper[4856]: E0126 17:00:10.155757 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:00:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:00:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:00:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:00:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:00:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:00:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:00:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:00:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17523591-a778-4a97-aeab-8a7a93101850\\\",\\\"systemUUID\\\":\\\"ca45d056-99cb-4442-8a44-7e899628ecb2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:00:10Z is after 2025-08-24T17:21:41Z" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.161722 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.161755 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.161765 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.161781 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.161792 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:10Z","lastTransitionTime":"2026-01-26T17:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:10 crc kubenswrapper[4856]: E0126 17:00:10.177744 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:00:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:00:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:00:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:00:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:00:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:00:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:00:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:00:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17523591-a778-4a97-aeab-8a7a93101850\\\",\\\"systemUUID\\\":\\\"ca45d056-99cb-4442-8a44-7e899628ecb2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:00:10Z is after 2025-08-24T17:21:41Z" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.182417 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.182463 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.182475 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.182495 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.182508 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:10Z","lastTransitionTime":"2026-01-26T17:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:10 crc kubenswrapper[4856]: E0126 17:00:10.197713 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:00:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:00:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:00:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:00:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:00:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:00:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:00:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:00:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17523591-a778-4a97-aeab-8a7a93101850\\\",\\\"systemUUID\\\":\\\"ca45d056-99cb-4442-8a44-7e899628ecb2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:00:10Z is after 2025-08-24T17:21:41Z" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.202159 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.202199 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.202210 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.202228 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.202241 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:10Z","lastTransitionTime":"2026-01-26T17:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:10 crc kubenswrapper[4856]: E0126 17:00:10.216614 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:00:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:00:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:00:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:00:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:00:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:00:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:00:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T17:00:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17523591-a778-4a97-aeab-8a7a93101850\\\",\\\"systemUUID\\\":\\\"ca45d056-99cb-4442-8a44-7e899628ecb2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:00:10Z is after 2025-08-24T17:21:41Z" Jan 26 17:00:10 crc kubenswrapper[4856]: E0126 17:00:10.216765 4856 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.218254 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.218291 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.218302 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.218321 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.218332 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:10Z","lastTransitionTime":"2026-01-26T17:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.321684 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.321739 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.321752 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.321772 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.321790 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:10Z","lastTransitionTime":"2026-01-26T17:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.394803 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.394984 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.395005 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 17:00:10 crc kubenswrapper[4856]: E0126 17:00:10.395126 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:00:10 crc kubenswrapper[4856]: E0126 17:00:10.395302 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.395331 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:00:10 crc kubenswrapper[4856]: E0126 17:00:10.395508 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-295wr" podUID="12e50462-28e6-4531-ada4-e652310e6cce" Jan 26 17:00:10 crc kubenswrapper[4856]: E0126 17:00:10.395627 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.424279 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.424319 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.424330 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.424348 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.424360 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:10Z","lastTransitionTime":"2026-01-26T17:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.528630 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.528709 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.528729 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.528755 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.528786 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:10Z","lastTransitionTime":"2026-01-26T17:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.553630 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 02:57:46.617172638 +0000 UTC Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.631860 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.631902 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.631917 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.631942 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.631957 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:10Z","lastTransitionTime":"2026-01-26T17:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.734453 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.734560 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.734583 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.734800 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.734816 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:10Z","lastTransitionTime":"2026-01-26T17:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.837874 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.837938 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.837962 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.837994 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.838017 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:10Z","lastTransitionTime":"2026-01-26T17:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.941049 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.941125 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.941143 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.941166 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:10 crc kubenswrapper[4856]: I0126 17:00:10.941184 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:10Z","lastTransitionTime":"2026-01-26T17:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:11 crc kubenswrapper[4856]: I0126 17:00:11.044674 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:11 crc kubenswrapper[4856]: I0126 17:00:11.044755 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:11 crc kubenswrapper[4856]: I0126 17:00:11.044824 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:11 crc kubenswrapper[4856]: I0126 17:00:11.044849 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:11 crc kubenswrapper[4856]: I0126 17:00:11.044871 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:11Z","lastTransitionTime":"2026-01-26T17:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:11 crc kubenswrapper[4856]: I0126 17:00:11.146895 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:11 crc kubenswrapper[4856]: I0126 17:00:11.146942 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:11 crc kubenswrapper[4856]: I0126 17:00:11.146955 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:11 crc kubenswrapper[4856]: I0126 17:00:11.146972 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:11 crc kubenswrapper[4856]: I0126 17:00:11.146984 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:11Z","lastTransitionTime":"2026-01-26T17:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:11 crc kubenswrapper[4856]: I0126 17:00:11.249984 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:11 crc kubenswrapper[4856]: I0126 17:00:11.250053 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:11 crc kubenswrapper[4856]: I0126 17:00:11.250069 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:11 crc kubenswrapper[4856]: I0126 17:00:11.250095 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:11 crc kubenswrapper[4856]: I0126 17:00:11.250113 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:11Z","lastTransitionTime":"2026-01-26T17:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:11 crc kubenswrapper[4856]: I0126 17:00:11.353688 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:11 crc kubenswrapper[4856]: I0126 17:00:11.353750 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:11 crc kubenswrapper[4856]: I0126 17:00:11.353768 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:11 crc kubenswrapper[4856]: I0126 17:00:11.353791 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:11 crc kubenswrapper[4856]: I0126 17:00:11.353809 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:11Z","lastTransitionTime":"2026-01-26T17:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:11 crc kubenswrapper[4856]: I0126 17:00:11.456375 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:11 crc kubenswrapper[4856]: I0126 17:00:11.456457 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:11 crc kubenswrapper[4856]: I0126 17:00:11.456480 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:11 crc kubenswrapper[4856]: I0126 17:00:11.456689 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:11 crc kubenswrapper[4856]: I0126 17:00:11.456710 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:11Z","lastTransitionTime":"2026-01-26T17:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:11 crc kubenswrapper[4856]: I0126 17:00:11.554296 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 22:45:43.688075778 +0000 UTC Jan 26 17:00:11 crc kubenswrapper[4856]: I0126 17:00:11.560263 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:11 crc kubenswrapper[4856]: I0126 17:00:11.560317 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:11 crc kubenswrapper[4856]: I0126 17:00:11.560336 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:11 crc kubenswrapper[4856]: I0126 17:00:11.560361 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:11 crc kubenswrapper[4856]: I0126 17:00:11.560436 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:11Z","lastTransitionTime":"2026-01-26T17:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:11 crc kubenswrapper[4856]: I0126 17:00:11.671203 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:11 crc kubenswrapper[4856]: I0126 17:00:11.671291 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:11 crc kubenswrapper[4856]: I0126 17:00:11.671317 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:11 crc kubenswrapper[4856]: I0126 17:00:11.671444 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:11 crc kubenswrapper[4856]: I0126 17:00:11.671473 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:11Z","lastTransitionTime":"2026-01-26T17:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:11 crc kubenswrapper[4856]: I0126 17:00:11.775059 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:11 crc kubenswrapper[4856]: I0126 17:00:11.775115 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:11 crc kubenswrapper[4856]: I0126 17:00:11.775127 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:11 crc kubenswrapper[4856]: I0126 17:00:11.775146 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:11 crc kubenswrapper[4856]: I0126 17:00:11.775162 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:11Z","lastTransitionTime":"2026-01-26T17:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:11 crc kubenswrapper[4856]: I0126 17:00:11.877924 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:11 crc kubenswrapper[4856]: I0126 17:00:11.877987 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:11 crc kubenswrapper[4856]: I0126 17:00:11.878004 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:11 crc kubenswrapper[4856]: I0126 17:00:11.878028 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:11 crc kubenswrapper[4856]: I0126 17:00:11.878047 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:11Z","lastTransitionTime":"2026-01-26T17:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:11 crc kubenswrapper[4856]: I0126 17:00:11.981951 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:11 crc kubenswrapper[4856]: I0126 17:00:11.982016 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:11 crc kubenswrapper[4856]: I0126 17:00:11.982033 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:11 crc kubenswrapper[4856]: I0126 17:00:11.982050 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:11 crc kubenswrapper[4856]: I0126 17:00:11.982060 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:11Z","lastTransitionTime":"2026-01-26T17:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:12 crc kubenswrapper[4856]: I0126 17:00:12.085778 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:12 crc kubenswrapper[4856]: I0126 17:00:12.085850 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:12 crc kubenswrapper[4856]: I0126 17:00:12.085864 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:12 crc kubenswrapper[4856]: I0126 17:00:12.085879 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:12 crc kubenswrapper[4856]: I0126 17:00:12.085889 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:12Z","lastTransitionTime":"2026-01-26T17:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:12 crc kubenswrapper[4856]: I0126 17:00:12.189218 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:12 crc kubenswrapper[4856]: I0126 17:00:12.189291 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:12 crc kubenswrapper[4856]: I0126 17:00:12.189309 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:12 crc kubenswrapper[4856]: I0126 17:00:12.189336 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:12 crc kubenswrapper[4856]: I0126 17:00:12.189354 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:12Z","lastTransitionTime":"2026-01-26T17:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:12 crc kubenswrapper[4856]: I0126 17:00:12.292226 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:12 crc kubenswrapper[4856]: I0126 17:00:12.292266 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:12 crc kubenswrapper[4856]: I0126 17:00:12.292274 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:12 crc kubenswrapper[4856]: I0126 17:00:12.292297 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:12 crc kubenswrapper[4856]: I0126 17:00:12.292307 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:12Z","lastTransitionTime":"2026-01-26T17:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:00:12 crc kubenswrapper[4856]: I0126 17:00:12.394441 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:00:12 crc kubenswrapper[4856]: I0126 17:00:12.394512 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 17:00:12 crc kubenswrapper[4856]: I0126 17:00:12.394574 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:00:12 crc kubenswrapper[4856]: E0126 17:00:12.394695 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:00:12 crc kubenswrapper[4856]: E0126 17:00:12.394891 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-295wr" podUID="12e50462-28e6-4531-ada4-e652310e6cce" Jan 26 17:00:12 crc kubenswrapper[4856]: I0126 17:00:12.394972 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:00:12 crc kubenswrapper[4856]: E0126 17:00:12.395032 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:00:12 crc kubenswrapper[4856]: E0126 17:00:12.395121 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:00:12 crc kubenswrapper[4856]: I0126 17:00:12.395727 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:12 crc kubenswrapper[4856]: I0126 17:00:12.395794 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:12 crc kubenswrapper[4856]: I0126 17:00:12.395815 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:12 crc kubenswrapper[4856]: I0126 17:00:12.395837 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:12 crc kubenswrapper[4856]: I0126 17:00:12.395854 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:12Z","lastTransitionTime":"2026-01-26T17:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:12 crc kubenswrapper[4856]: I0126 17:00:12.499392 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:12 crc kubenswrapper[4856]: I0126 17:00:12.499450 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:12 crc kubenswrapper[4856]: I0126 17:00:12.499471 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:12 crc kubenswrapper[4856]: I0126 17:00:12.499497 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:12 crc kubenswrapper[4856]: I0126 17:00:12.499514 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:12Z","lastTransitionTime":"2026-01-26T17:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:12 crc kubenswrapper[4856]: I0126 17:00:12.555217 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 18:01:13.908136797 +0000 UTC Jan 26 17:00:12 crc kubenswrapper[4856]: I0126 17:00:12.602102 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:12 crc kubenswrapper[4856]: I0126 17:00:12.602185 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:12 crc kubenswrapper[4856]: I0126 17:00:12.602213 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:12 crc kubenswrapper[4856]: I0126 17:00:12.602248 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:12 crc kubenswrapper[4856]: I0126 17:00:12.602274 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:12Z","lastTransitionTime":"2026-01-26T17:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:12 crc kubenswrapper[4856]: I0126 17:00:12.706014 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:12 crc kubenswrapper[4856]: I0126 17:00:12.706075 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:12 crc kubenswrapper[4856]: I0126 17:00:12.706085 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:12 crc kubenswrapper[4856]: I0126 17:00:12.706105 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:12 crc kubenswrapper[4856]: I0126 17:00:12.706118 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:12Z","lastTransitionTime":"2026-01-26T17:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:12 crc kubenswrapper[4856]: I0126 17:00:12.811940 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:12 crc kubenswrapper[4856]: I0126 17:00:12.812037 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:12 crc kubenswrapper[4856]: I0126 17:00:12.812050 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:12 crc kubenswrapper[4856]: I0126 17:00:12.812087 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:12 crc kubenswrapper[4856]: I0126 17:00:12.812115 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:12Z","lastTransitionTime":"2026-01-26T17:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:12 crc kubenswrapper[4856]: I0126 17:00:12.915004 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:12 crc kubenswrapper[4856]: I0126 17:00:12.915054 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:12 crc kubenswrapper[4856]: I0126 17:00:12.915071 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:12 crc kubenswrapper[4856]: I0126 17:00:12.915090 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:12 crc kubenswrapper[4856]: I0126 17:00:12.915105 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:12Z","lastTransitionTime":"2026-01-26T17:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.018338 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.018394 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.018404 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.018423 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.018469 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:13Z","lastTransitionTime":"2026-01-26T17:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.121866 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.121935 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.121953 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.121980 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.121997 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:13Z","lastTransitionTime":"2026-01-26T17:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.225234 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.225288 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.225301 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.225371 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.225387 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:13Z","lastTransitionTime":"2026-01-26T17:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.328219 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.328312 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.328327 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.328347 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.328359 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:13Z","lastTransitionTime":"2026-01-26T17:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.397143 4856 scope.go:117] "RemoveContainer" containerID="203d756903498bed1c57a3e87a95f4b24f808514567a99505ba2f3cfec468cb6" Jan 26 17:00:13 crc kubenswrapper[4856]: E0126 17:00:13.397668 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pxh94_openshift-ovn-kubernetes(ab5b6f50-172b-4535-a0f9-5d103bcab4e7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.422757 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191ad5d41024c88c1a4fbc30f307dd6340fd55b93b00d22f62209d0e82be286f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf2a4d8be409a46ffe5702797e56a80023a72a19fb7dc5c49e5f4984cedc600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:00:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:00:13 crc kubenswrapper[4856]: 
I0126 17:00:13.431391 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.431472 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.431496 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.431560 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.431586 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:13Z","lastTransitionTime":"2026-01-26T17:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.445153 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c823748751e9938f10fab08e33b5fffff5a6d15961ce028204131b7b69a56c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:00:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.468896 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:00:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.488003 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t4fq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d21ac89-2ebd-49c3-9fe0-6c3f352d2257\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://627cd6b39fdedd967fffcfc0755439277b2d73016fba1ddc7f5b9d5deba43b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p5swq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t4fq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:00:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.510681 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rq622" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a742e7b-c420-46e3-9e96-e9c744af6124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afeb20035224feeab28a92ac77b43a24e653e49c56a25590a9861019a2b7a8ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad7222c9b91c0065a545bc1904d9864a5923cc13bfb6617daeb4a965a830f191\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:59:45Z\\\",\\\"message\\\":\\\"2026-01-26T16:59:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_41ab0694-d9c8-49a7-bf30-57e732ac7550\\\\n2026-01-26T16:59:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_41ab0694-d9c8-49a7-bf30-57e732ac7550 to /host/opt/cni/bin/\\\\n2026-01-26T16:59:00Z [verbose] multus-daemon started\\\\n2026-01-26T16:59:00Z [verbose] Readiness Indicator file check\\\\n2026-01-26T16:59:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8plh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rq622\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:00:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.530016 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59ecd87a-c5db-446d-ad3e-cfabbd648c1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89975b8f9428f81ab5d3fb48ced5dd9c837bea2feea3b89f5f7ff8d7d5d15b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40d432159a07ba20bf95f058bf8a597d67cbd8d852519bed035694f8ba3d8ec4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59ba0f131eee14df389391e46bc772894e49e976c3663df5fb3bc98b3cb4a3d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03a01b651d9f66da4dd1f6e0d29ad97c0d6ae46b644c3d997d8dc99476706df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f07438e20bdf71c752bb661084c835341999c561bb5442d75e177223881276f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T16:58:51Z\\\"
,\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 16:58:43.431198 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 16:58:43.432227 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-99450334/tls.crt::/tmp/serving-cert-99450334/tls.key\\\\\\\"\\\\nI0126 16:58:50.828320 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 16:58:50.830449 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 16:58:50.830472 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 16:58:50.830509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 16:58:50.830515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 16:58:50.834885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 16:58:50.834958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 16:58:50.834991 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 16:58:50.834905 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 16:58:50.835015 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 16:58:50.835084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 16:58:50.835106 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 16:58:50.835116 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 16:58:50.837641 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2daa3f13d37b7ae19dfca406b6b5cfdcd4f211287f7d410193f2fee36a24553\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ffc4b80383322e2c628a33ac37f15fd77c7650ca108c022f97bb48aad023462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ffc4b80383322e2c628a33ac37f15fd
77c7650ca108c022f97bb48aad023462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:00:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.534388 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.534475 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.534520 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.534602 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.534662 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:13Z","lastTransitionTime":"2026-01-26T17:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.552420 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab14b0d1-ba6c-4e70-bc80-f4364577742a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a00494ca589263eb0f50c879c0aa1e1c263f74e302325f88eee31b220ebf53b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb3c5348b8b
83991cbb42255dc07d74fe50e200793efe1a7b2b2727a5c2be800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3027fabe8a104141386b9767218f38a143318580dd2a33448fed2c05688ba1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a03e2fad94ce4122f1d77ce30dc80bb78298396649c12b885c386e5f8eea50b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:00:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.556460 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 12:56:45.815776686 +0000 UTC Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.579321 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203d756903498bed1c57a3e87a95f4b24f808514567a99505ba2f3cfec468cb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203d756903498bed1c57a3e87a95f4b24f808514567a99505ba2f3cfec468cb6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T16:59:58Z\\\",\\\"message\\\":\\\" server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0126 16:59:57.729092 6886 port_cache.go:96] port-cache(openshift-network-diagnostics_network-check-target-xd92c): added port \\\\u0026{name:openshift-network-diagnostics_network-check-target-xd92c uuid:61897e97-c771-4738-8709-09636387cb00 logicalSwitch:crc ips:[0xc008f54210] mac:[10 88 10 217 
0 4] expires:{wall:0 ext:0 loc:\\\\u003cnil\\\\u003e}} with IP: [10.217.0.4/23] and MAC: 0a:58:0a:d9:00:04\\\\nI0126 16:59:57.729124 6886 pods.go:252] [openshift-network-diagnostics/network-check-target-xd92c] addLogicalPort took 1.706822ms, libovsdb time 885.597µs\\\\nI0126 16:59:57.729132 6886 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-diagnostics/network-check-target-xd92c after 0 failed attempt(s)\\\\nF0126 16:59:57.729134 6886 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pxh94_openshift-ovn-kubernetes(ab5b6f50-172b-4535-a0f9-5d103bcab4e7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67317281647e2a2cf
1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kdbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pxh94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:00:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.597360 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-295wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e50462-28e6-4531-ada4-e652310e6cce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf98h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf98h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:59:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-295wr\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:00:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.611070 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:00:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.631257 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad7b59f9-beb7-49d6-a2d1-e29133e46854\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc53a6a6d4222ee5e1ac29fd5957d8cdf3fd42de72c68c85329374ce7afc4004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e19d1ef39aec00337edf433f38ed92c785d6dfeb404363670b4740889fe00363\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0aff003d65fa0a8f2caa68f9024240a3cd07f45721145057419ae07d30b196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:59Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79548fe5ee0a5b0251f7fa0304f57efb25ce63c54e5bb06a0da69e483461d993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0c0b
092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0c0b092f1edef091b4888c2231fd7ffa210e7233355b50d43cba49ffcb897d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62b76bf7fd89103159957c0550c2cf0b2dd3dba63dc4d74c6f8acc845ba1edea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:06Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://249b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://249b3a1717ed5f1cca8779359055f1d0e36eb8dbb8c5afba0b91d36ebd571da8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:59:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:59:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm9x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v2l7v\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:00:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.637130 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.637213 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.637224 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.637243 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.637257 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:13Z","lastTransitionTime":"2026-01-26T17:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.648431 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63c75ede-5170-4db0-811b-5217ef8d72b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da26ddcfc6ccde3c9aabd63bdef3435f9a9eaab8c095bfeef5670c1295576cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ca3fa13d9e8d442efa93b44a870369f7df3fe7562d77b98528f5c19a751f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96lw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xm9cq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:00:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.669437 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc864b0d-83bc-4954-9c61-ad650157caff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbde934a6c8acad10ca3ab8206d0ddbd4f7b17e9d304b898a68f4d3b0303bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b48f763d4aff37169399be766d5ab4f7ebbf91f304d139c9022a8556946eb107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0094e662f53c4832a984e05a880021af05ffc4c27f25394c28a070d9ef5490d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb95f623fa4f5f42217649ccce4225f9fe588bb3558da9334eef2b40ddb62486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://cb95f623fa4f5f42217649ccce4225f9fe588bb3558da9334eef2b40ddb62486\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:00:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.691738 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c3b0574-b4cc-483d-ae88-6517d1f30772\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec9063a7c03990fc26fc47427f164a769fd649c2bdbd9d23ea7f646e569734be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67797295a8c3952902b2696c6fdb26b72ce1826b5ccd522a24aac90a0411b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67797295a8c3952902b2696c6fdb26b72ce1826b5ccd522a24aac90a0411b5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:00:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.710387 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c09748c0963bdb50c9552f249d1135ea53e68c52babd788dad050951cb849cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T17:00:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.727691 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:00:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.739920 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.739952 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.739966 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.739986 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.740000 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:13Z","lastTransitionTime":"2026-01-26T17:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.743385 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-v7579" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a77e85f9-b566-4807-bb92-55963c97b93c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba87c9fc35c230bbee201a5176cb467309f0b9aee82dfc81f3b677a15486d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"
ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n9h7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03dc794e9c2035f2e1983eacad3e51d76223cb1b82e2f402c73f9453e4bd2f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:59:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4n9h7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-v7579\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:00:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.770666 4856 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"280cd8ed-5dcf-487a-8b00-22204e94d54f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c387234ad8d7123da333d3de4a80f3a79c25dddf0c3a0fb004b521161ff105b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d65b21fc101230cb18ee921fc481e83c944dde8fe01
074931b90551e082ee249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff371891a210c6f3498b0d8377c477749a9ea438aa74f1e33f8ac9047df447ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c687b137e2bdfb70b19588ae8f5c65a23c2df57716cfd6918856236f2d6610a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c814433ba35046d47c29524f19b728793436e9f6967a6ea7249e35f673f48a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf39fbfd0b23f9b34e42610ae3603d849bcf4211f53ba47cbbebdaf47a9687d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf39fbfd0b23f9b34e4
2610ae3603d849bcf4211f53ba47cbbebdaf47a9687d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ca7ee60b82663fdc02dc2dd3f7af379df8407800d04c57f4f4d09d49ed9aa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ca7ee60b82663fdc02dc2dd3f7af379df8407800d04c57f4f4d09d49ed9aa0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ee2a878cbd2cdef8fe8d9bb62a4554ffb8aeadfb90ab92b4ff6ec965824ec37a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2a878cbd2cdef8fe8d9bb62a4554ffb8aeadfb90ab92b4ff6ec965824ec37a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T16:58:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T16:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resourc
e-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:00:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.787653 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tp5hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f28414c-12c1-4adb-be7b-6182310828eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0af4930ff9cebd62106545fb3da2dfc93f7b591426fe4c85aa2e637b60c935f7\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T16:58:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzc59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T16:58:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tp5hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T17:00:13Z is after 2025-08-24T17:21:41Z" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.843374 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.843441 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.843465 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:13 crc 
kubenswrapper[4856]: I0126 17:00:13.843492 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.843508 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:13Z","lastTransitionTime":"2026-01-26T17:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.946393 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.946474 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.946495 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.946577 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:13 crc kubenswrapper[4856]: I0126 17:00:13.946607 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:13Z","lastTransitionTime":"2026-01-26T17:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:14 crc kubenswrapper[4856]: I0126 17:00:14.050309 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:14 crc kubenswrapper[4856]: I0126 17:00:14.050370 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:14 crc kubenswrapper[4856]: I0126 17:00:14.050389 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:14 crc kubenswrapper[4856]: I0126 17:00:14.050416 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:14 crc kubenswrapper[4856]: I0126 17:00:14.050435 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:14Z","lastTransitionTime":"2026-01-26T17:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:14 crc kubenswrapper[4856]: I0126 17:00:14.154041 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:14 crc kubenswrapper[4856]: I0126 17:00:14.154122 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:14 crc kubenswrapper[4856]: I0126 17:00:14.154140 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:14 crc kubenswrapper[4856]: I0126 17:00:14.154166 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:14 crc kubenswrapper[4856]: I0126 17:00:14.154184 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:14Z","lastTransitionTime":"2026-01-26T17:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:14 crc kubenswrapper[4856]: I0126 17:00:14.258026 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:14 crc kubenswrapper[4856]: I0126 17:00:14.258078 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:14 crc kubenswrapper[4856]: I0126 17:00:14.258096 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:14 crc kubenswrapper[4856]: I0126 17:00:14.258120 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:14 crc kubenswrapper[4856]: I0126 17:00:14.258139 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:14Z","lastTransitionTime":"2026-01-26T17:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:14 crc kubenswrapper[4856]: I0126 17:00:14.360139 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:14 crc kubenswrapper[4856]: I0126 17:00:14.360188 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:14 crc kubenswrapper[4856]: I0126 17:00:14.360206 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:14 crc kubenswrapper[4856]: I0126 17:00:14.360230 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:14 crc kubenswrapper[4856]: I0126 17:00:14.360248 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:14Z","lastTransitionTime":"2026-01-26T17:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:14 crc kubenswrapper[4856]: I0126 17:00:14.378940 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12e50462-28e6-4531-ada4-e652310e6cce-metrics-certs\") pod \"network-metrics-daemon-295wr\" (UID: \"12e50462-28e6-4531-ada4-e652310e6cce\") " pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 17:00:14 crc kubenswrapper[4856]: E0126 17:00:14.379207 4856 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 17:00:14 crc kubenswrapper[4856]: E0126 17:00:14.379382 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12e50462-28e6-4531-ada4-e652310e6cce-metrics-certs podName:12e50462-28e6-4531-ada4-e652310e6cce nodeName:}" failed. No retries permitted until 2026-01-26 17:01:18.37933313 +0000 UTC m=+174.332587121 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/12e50462-28e6-4531-ada4-e652310e6cce-metrics-certs") pod "network-metrics-daemon-295wr" (UID: "12e50462-28e6-4531-ada4-e652310e6cce") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 17:00:14 crc kubenswrapper[4856]: I0126 17:00:14.394935 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:00:14 crc kubenswrapper[4856]: I0126 17:00:14.394988 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 17:00:14 crc kubenswrapper[4856]: I0126 17:00:14.395058 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:00:14 crc kubenswrapper[4856]: I0126 17:00:14.395089 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:00:14 crc kubenswrapper[4856]: E0126 17:00:14.395379 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-295wr" podUID="12e50462-28e6-4531-ada4-e652310e6cce" Jan 26 17:00:14 crc kubenswrapper[4856]: E0126 17:00:14.395593 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:00:14 crc kubenswrapper[4856]: E0126 17:00:14.395797 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:00:14 crc kubenswrapper[4856]: E0126 17:00:14.395924 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:00:14 crc kubenswrapper[4856]: I0126 17:00:14.462199 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:14 crc kubenswrapper[4856]: I0126 17:00:14.462236 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:14 crc kubenswrapper[4856]: I0126 17:00:14.462247 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:14 crc kubenswrapper[4856]: I0126 17:00:14.462262 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:14 crc kubenswrapper[4856]: I0126 17:00:14.462273 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:14Z","lastTransitionTime":"2026-01-26T17:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:14 crc kubenswrapper[4856]: I0126 17:00:14.556576 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 08:38:11.859416206 +0000 UTC Jan 26 17:00:14 crc kubenswrapper[4856]: I0126 17:00:14.564496 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:14 crc kubenswrapper[4856]: I0126 17:00:14.564558 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:14 crc kubenswrapper[4856]: I0126 17:00:14.564571 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:14 crc kubenswrapper[4856]: I0126 17:00:14.564585 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:14 crc kubenswrapper[4856]: I0126 17:00:14.564596 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:14Z","lastTransitionTime":"2026-01-26T17:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:14 crc kubenswrapper[4856]: I0126 17:00:14.791270 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:14 crc kubenswrapper[4856]: I0126 17:00:14.791304 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:14 crc kubenswrapper[4856]: I0126 17:00:14.791312 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:14 crc kubenswrapper[4856]: I0126 17:00:14.791328 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:14 crc kubenswrapper[4856]: I0126 17:00:14.791340 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:14Z","lastTransitionTime":"2026-01-26T17:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:14 crc kubenswrapper[4856]: I0126 17:00:14.893462 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:14 crc kubenswrapper[4856]: I0126 17:00:14.893510 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:14 crc kubenswrapper[4856]: I0126 17:00:14.893522 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:14 crc kubenswrapper[4856]: I0126 17:00:14.893563 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:14 crc kubenswrapper[4856]: I0126 17:00:14.893575 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:14Z","lastTransitionTime":"2026-01-26T17:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:14 crc kubenswrapper[4856]: I0126 17:00:14.997413 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:14 crc kubenswrapper[4856]: I0126 17:00:14.997496 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:14 crc kubenswrapper[4856]: I0126 17:00:14.997554 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:14 crc kubenswrapper[4856]: I0126 17:00:14.997592 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:14 crc kubenswrapper[4856]: I0126 17:00:14.997619 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:14Z","lastTransitionTime":"2026-01-26T17:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.100460 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.100821 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.100996 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.101149 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.101275 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:15Z","lastTransitionTime":"2026-01-26T17:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.203905 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.204189 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.204257 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.204323 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.204379 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:15Z","lastTransitionTime":"2026-01-26T17:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.307481 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.307555 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.307564 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.307582 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.307591 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:15Z","lastTransitionTime":"2026-01-26T17:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.411958 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.412010 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.412022 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.412039 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.412054 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:15Z","lastTransitionTime":"2026-01-26T17:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.568400 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 14:43:38.756110219 +0000 UTC Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.571088 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.571154 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.571165 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.571196 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.571209 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:15Z","lastTransitionTime":"2026-01-26T17:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.597386 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-t4fq2" podStartSLOduration=82.597346293 podStartE2EDuration="1m22.597346293s" podCreationTimestamp="2026-01-26 16:58:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:00:15.582847484 +0000 UTC m=+111.536101465" watchObservedRunningTime="2026-01-26 17:00:15.597346293 +0000 UTC m=+111.550600284" Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.597899 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-rq622" podStartSLOduration=81.59789124 podStartE2EDuration="1m21.59789124s" podCreationTimestamp="2026-01-26 16:58:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:00:15.597551549 +0000 UTC m=+111.550805540" watchObservedRunningTime="2026-01-26 17:00:15.59789124 +0000 UTC m=+111.551145221" Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.622576 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=67.622560916 podStartE2EDuration="1m7.622560916s" podCreationTimestamp="2026-01-26 16:59:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:00:15.622380181 +0000 UTC m=+111.575634182" watchObservedRunningTime="2026-01-26 17:00:15.622560916 +0000 UTC m=+111.575814897" Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.642950 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=7.642924933 
podStartE2EDuration="7.642924933s" podCreationTimestamp="2026-01-26 17:00:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:00:15.642045726 +0000 UTC m=+111.595299707" watchObservedRunningTime="2026-01-26 17:00:15.642924933 +0000 UTC m=+111.596178924" Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.674183 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.674224 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.674237 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.674252 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.674264 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:15Z","lastTransitionTime":"2026-01-26T17:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.720910 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-v2l7v" podStartSLOduration=81.720889012 podStartE2EDuration="1m21.720889012s" podCreationTimestamp="2026-01-26 16:58:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:00:15.720592533 +0000 UTC m=+111.673846524" watchObservedRunningTime="2026-01-26 17:00:15.720889012 +0000 UTC m=+111.674143003" Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.752067 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podStartSLOduration=81.752051285 podStartE2EDuration="1m21.752051285s" podCreationTimestamp="2026-01-26 16:58:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:00:15.735637638 +0000 UTC m=+111.688891629" watchObservedRunningTime="2026-01-26 17:00:15.752051285 +0000 UTC m=+111.705305266" Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.752243 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=62.752239391 podStartE2EDuration="1m2.752239391s" podCreationTimestamp="2026-01-26 16:59:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:00:15.751911751 +0000 UTC m=+111.705165742" watchObservedRunningTime="2026-01-26 17:00:15.752239391 +0000 UTC m=+111.705493372" Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.775643 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=30.775623948 podStartE2EDuration="30.775623948s" podCreationTimestamp="2026-01-26 16:59:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:00:15.762891513 +0000 UTC m=+111.716145494" watchObservedRunningTime="2026-01-26 17:00:15.775623948 +0000 UTC m=+111.728877929" Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.776310 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.776370 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.776380 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.776395 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.776405 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:15Z","lastTransitionTime":"2026-01-26T17:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.811769 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-v7579" podStartSLOduration=80.811752622 podStartE2EDuration="1m20.811752622s" podCreationTimestamp="2026-01-26 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:00:15.811435272 +0000 UTC m=+111.764689273" watchObservedRunningTime="2026-01-26 17:00:15.811752622 +0000 UTC m=+111.765006603" Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.839850 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=8.839828292 podStartE2EDuration="8.839828292s" podCreationTimestamp="2026-01-26 17:00:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:00:15.837423199 +0000 UTC m=+111.790677190" watchObservedRunningTime="2026-01-26 17:00:15.839828292 +0000 UTC m=+111.793082283" Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.853243 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-tp5hk" podStartSLOduration=82.853216757 podStartE2EDuration="1m22.853216757s" podCreationTimestamp="2026-01-26 16:58:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:00:15.852262528 +0000 UTC m=+111.805516529" watchObservedRunningTime="2026-01-26 17:00:15.853216757 +0000 UTC m=+111.806470758" Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.879465 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.879507 
4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.879519 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.879545 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.879554 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:15Z","lastTransitionTime":"2026-01-26T17:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.981682 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.981717 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.981726 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.981741 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:15 crc kubenswrapper[4856]: I0126 17:00:15.981769 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:15Z","lastTransitionTime":"2026-01-26T17:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:00:16 crc kubenswrapper[4856]: I0126 17:00:16.084473 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:16 crc kubenswrapper[4856]: I0126 17:00:16.084543 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:16 crc kubenswrapper[4856]: I0126 17:00:16.084555 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:16 crc kubenswrapper[4856]: I0126 17:00:16.084575 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:16 crc kubenswrapper[4856]: I0126 17:00:16.084588 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:16Z","lastTransitionTime":"2026-01-26T17:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:16 crc kubenswrapper[4856]: I0126 17:00:16.187061 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:16 crc kubenswrapper[4856]: I0126 17:00:16.187087 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:16 crc kubenswrapper[4856]: I0126 17:00:16.187095 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:16 crc kubenswrapper[4856]: I0126 17:00:16.187107 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:16 crc kubenswrapper[4856]: I0126 17:00:16.187115 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:16Z","lastTransitionTime":"2026-01-26T17:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:16 crc kubenswrapper[4856]: I0126 17:00:16.289744 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:16 crc kubenswrapper[4856]: I0126 17:00:16.289802 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:16 crc kubenswrapper[4856]: I0126 17:00:16.289824 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:16 crc kubenswrapper[4856]: I0126 17:00:16.289849 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:16 crc kubenswrapper[4856]: I0126 17:00:16.289866 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:16Z","lastTransitionTime":"2026-01-26T17:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:16 crc kubenswrapper[4856]: I0126 17:00:16.393722 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:16 crc kubenswrapper[4856]: I0126 17:00:16.393777 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:16 crc kubenswrapper[4856]: I0126 17:00:16.393792 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:16 crc kubenswrapper[4856]: I0126 17:00:16.393819 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:16 crc kubenswrapper[4856]: I0126 17:00:16.393832 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:16Z","lastTransitionTime":"2026-01-26T17:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:00:16 crc kubenswrapper[4856]: I0126 17:00:16.394179 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:00:16 crc kubenswrapper[4856]: I0126 17:00:16.394230 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:00:16 crc kubenswrapper[4856]: I0126 17:00:16.394269 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:00:16 crc kubenswrapper[4856]: I0126 17:00:16.394344 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 17:00:16 crc kubenswrapper[4856]: E0126 17:00:16.394386 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:00:16 crc kubenswrapper[4856]: E0126 17:00:16.394450 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-295wr" podUID="12e50462-28e6-4531-ada4-e652310e6cce" Jan 26 17:00:16 crc kubenswrapper[4856]: E0126 17:00:16.394591 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:00:16 crc kubenswrapper[4856]: E0126 17:00:16.394793 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:00:16 crc kubenswrapper[4856]: I0126 17:00:16.496373 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:16 crc kubenswrapper[4856]: I0126 17:00:16.496407 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:16 crc kubenswrapper[4856]: I0126 17:00:16.496417 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:16 crc kubenswrapper[4856]: I0126 17:00:16.496432 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:16 crc kubenswrapper[4856]: I0126 17:00:16.496442 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:16Z","lastTransitionTime":"2026-01-26T17:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:16 crc kubenswrapper[4856]: I0126 17:00:16.587314 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 15:27:07.353976077 +0000 UTC Jan 26 17:00:16 crc kubenswrapper[4856]: I0126 17:00:16.599773 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:16 crc kubenswrapper[4856]: I0126 17:00:16.599818 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:16 crc kubenswrapper[4856]: I0126 17:00:16.599830 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:16 crc kubenswrapper[4856]: I0126 17:00:16.599852 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:16 crc kubenswrapper[4856]: I0126 17:00:16.599864 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:16Z","lastTransitionTime":"2026-01-26T17:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:16 crc kubenswrapper[4856]: I0126 17:00:16.704133 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:16 crc kubenswrapper[4856]: I0126 17:00:16.704207 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:16 crc kubenswrapper[4856]: I0126 17:00:16.704234 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:16 crc kubenswrapper[4856]: I0126 17:00:16.704287 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:16 crc kubenswrapper[4856]: I0126 17:00:16.704311 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:16Z","lastTransitionTime":"2026-01-26T17:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:16 crc kubenswrapper[4856]: I0126 17:00:16.807487 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:16 crc kubenswrapper[4856]: I0126 17:00:16.807828 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:16 crc kubenswrapper[4856]: I0126 17:00:16.807840 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:16 crc kubenswrapper[4856]: I0126 17:00:16.807857 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:16 crc kubenswrapper[4856]: I0126 17:00:16.807869 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:16Z","lastTransitionTime":"2026-01-26T17:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:16 crc kubenswrapper[4856]: I0126 17:00:16.910367 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:16 crc kubenswrapper[4856]: I0126 17:00:16.910423 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:16 crc kubenswrapper[4856]: I0126 17:00:16.910443 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:16 crc kubenswrapper[4856]: I0126 17:00:16.910468 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:16 crc kubenswrapper[4856]: I0126 17:00:16.910486 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:16Z","lastTransitionTime":"2026-01-26T17:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:17 crc kubenswrapper[4856]: I0126 17:00:17.012496 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:17 crc kubenswrapper[4856]: I0126 17:00:17.012582 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:17 crc kubenswrapper[4856]: I0126 17:00:17.012597 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:17 crc kubenswrapper[4856]: I0126 17:00:17.012622 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:17 crc kubenswrapper[4856]: I0126 17:00:17.012634 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:17Z","lastTransitionTime":"2026-01-26T17:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:17 crc kubenswrapper[4856]: I0126 17:00:17.116219 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:17 crc kubenswrapper[4856]: I0126 17:00:17.116259 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:17 crc kubenswrapper[4856]: I0126 17:00:17.116272 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:17 crc kubenswrapper[4856]: I0126 17:00:17.116290 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:17 crc kubenswrapper[4856]: I0126 17:00:17.116302 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:17Z","lastTransitionTime":"2026-01-26T17:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:17 crc kubenswrapper[4856]: I0126 17:00:17.218659 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:17 crc kubenswrapper[4856]: I0126 17:00:17.218701 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:17 crc kubenswrapper[4856]: I0126 17:00:17.218714 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:17 crc kubenswrapper[4856]: I0126 17:00:17.218754 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:17 crc kubenswrapper[4856]: I0126 17:00:17.218770 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:17Z","lastTransitionTime":"2026-01-26T17:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:17 crc kubenswrapper[4856]: I0126 17:00:17.428243 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:17 crc kubenswrapper[4856]: I0126 17:00:17.428270 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:17 crc kubenswrapper[4856]: I0126 17:00:17.428278 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:17 crc kubenswrapper[4856]: I0126 17:00:17.428289 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:17 crc kubenswrapper[4856]: I0126 17:00:17.428299 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:17Z","lastTransitionTime":"2026-01-26T17:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:17 crc kubenswrapper[4856]: I0126 17:00:17.530737 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:17 crc kubenswrapper[4856]: I0126 17:00:17.530779 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:17 crc kubenswrapper[4856]: I0126 17:00:17.530791 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:17 crc kubenswrapper[4856]: I0126 17:00:17.530808 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:17 crc kubenswrapper[4856]: I0126 17:00:17.530820 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:17Z","lastTransitionTime":"2026-01-26T17:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:17 crc kubenswrapper[4856]: I0126 17:00:17.588230 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 18:28:46.994851214 +0000 UTC Jan 26 17:00:17 crc kubenswrapper[4856]: I0126 17:00:17.633223 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:17 crc kubenswrapper[4856]: I0126 17:00:17.633281 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:17 crc kubenswrapper[4856]: I0126 17:00:17.633295 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:17 crc kubenswrapper[4856]: I0126 17:00:17.633316 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:17 crc kubenswrapper[4856]: I0126 17:00:17.633348 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:17Z","lastTransitionTime":"2026-01-26T17:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:17 crc kubenswrapper[4856]: I0126 17:00:17.736147 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:17 crc kubenswrapper[4856]: I0126 17:00:17.736213 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:17 crc kubenswrapper[4856]: I0126 17:00:17.736228 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:17 crc kubenswrapper[4856]: I0126 17:00:17.736272 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:17 crc kubenswrapper[4856]: I0126 17:00:17.736287 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:17Z","lastTransitionTime":"2026-01-26T17:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:17 crc kubenswrapper[4856]: I0126 17:00:17.838881 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:17 crc kubenswrapper[4856]: I0126 17:00:17.838931 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:17 crc kubenswrapper[4856]: I0126 17:00:17.838943 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:17 crc kubenswrapper[4856]: I0126 17:00:17.838962 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:17 crc kubenswrapper[4856]: I0126 17:00:17.838973 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:17Z","lastTransitionTime":"2026-01-26T17:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:17 crc kubenswrapper[4856]: I0126 17:00:17.941431 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:17 crc kubenswrapper[4856]: I0126 17:00:17.941500 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:17 crc kubenswrapper[4856]: I0126 17:00:17.941518 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:17 crc kubenswrapper[4856]: I0126 17:00:17.941598 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:17 crc kubenswrapper[4856]: I0126 17:00:17.941621 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:17Z","lastTransitionTime":"2026-01-26T17:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.044545 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.044592 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.044606 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.044623 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.044638 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:18Z","lastTransitionTime":"2026-01-26T17:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.147935 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.147998 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.148017 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.148041 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.148059 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:18Z","lastTransitionTime":"2026-01-26T17:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.253889 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.253943 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.253960 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.253984 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.253998 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:18Z","lastTransitionTime":"2026-01-26T17:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.356691 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.356732 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.356742 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.356757 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.356768 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:18Z","lastTransitionTime":"2026-01-26T17:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.395107 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.395170 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.395270 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.395346 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 17:00:18 crc kubenswrapper[4856]: E0126 17:00:18.395673 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:00:18 crc kubenswrapper[4856]: E0126 17:00:18.395857 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:00:18 crc kubenswrapper[4856]: E0126 17:00:18.396009 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:00:18 crc kubenswrapper[4856]: E0126 17:00:18.396135 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-295wr" podUID="12e50462-28e6-4531-ada4-e652310e6cce" Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.459839 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.459895 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.459913 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.459934 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.459947 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:18Z","lastTransitionTime":"2026-01-26T17:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.563076 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.563112 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.563125 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.563176 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.563188 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:18Z","lastTransitionTime":"2026-01-26T17:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.588589 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 22:36:58.335702397 +0000 UTC Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.666269 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.666309 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.666321 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.666337 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.666346 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:18Z","lastTransitionTime":"2026-01-26T17:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.776869 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.776940 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.776955 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.776974 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.776989 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:18Z","lastTransitionTime":"2026-01-26T17:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.880464 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.880502 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.880511 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.880546 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.880558 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:18Z","lastTransitionTime":"2026-01-26T17:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.984139 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.984204 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.984221 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.984246 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:18 crc kubenswrapper[4856]: I0126 17:00:18.984266 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:18Z","lastTransitionTime":"2026-01-26T17:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:19 crc kubenswrapper[4856]: I0126 17:00:19.087342 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:19 crc kubenswrapper[4856]: I0126 17:00:19.087386 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:19 crc kubenswrapper[4856]: I0126 17:00:19.087395 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:19 crc kubenswrapper[4856]: I0126 17:00:19.087411 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:19 crc kubenswrapper[4856]: I0126 17:00:19.087422 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:19Z","lastTransitionTime":"2026-01-26T17:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:19 crc kubenswrapper[4856]: I0126 17:00:19.189991 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:19 crc kubenswrapper[4856]: I0126 17:00:19.190035 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:19 crc kubenswrapper[4856]: I0126 17:00:19.190044 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:19 crc kubenswrapper[4856]: I0126 17:00:19.190061 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:19 crc kubenswrapper[4856]: I0126 17:00:19.190070 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:19Z","lastTransitionTime":"2026-01-26T17:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:19 crc kubenswrapper[4856]: I0126 17:00:19.293634 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:19 crc kubenswrapper[4856]: I0126 17:00:19.293672 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:19 crc kubenswrapper[4856]: I0126 17:00:19.293683 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:19 crc kubenswrapper[4856]: I0126 17:00:19.293713 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:19 crc kubenswrapper[4856]: I0126 17:00:19.293724 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:19Z","lastTransitionTime":"2026-01-26T17:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:19 crc kubenswrapper[4856]: I0126 17:00:19.485208 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:19 crc kubenswrapper[4856]: I0126 17:00:19.485244 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:19 crc kubenswrapper[4856]: I0126 17:00:19.485255 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:19 crc kubenswrapper[4856]: I0126 17:00:19.485273 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:19 crc kubenswrapper[4856]: I0126 17:00:19.485285 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:19Z","lastTransitionTime":"2026-01-26T17:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:19 crc kubenswrapper[4856]: I0126 17:00:19.588384 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:19 crc kubenswrapper[4856]: I0126 17:00:19.588450 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:19 crc kubenswrapper[4856]: I0126 17:00:19.588461 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:19 crc kubenswrapper[4856]: I0126 17:00:19.588479 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:19 crc kubenswrapper[4856]: I0126 17:00:19.588490 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:19Z","lastTransitionTime":"2026-01-26T17:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:19 crc kubenswrapper[4856]: I0126 17:00:19.588701 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 08:23:55.898464011 +0000 UTC Jan 26 17:00:19 crc kubenswrapper[4856]: I0126 17:00:19.697128 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:19 crc kubenswrapper[4856]: I0126 17:00:19.697179 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:19 crc kubenswrapper[4856]: I0126 17:00:19.697193 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:19 crc kubenswrapper[4856]: I0126 17:00:19.697215 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:19 crc kubenswrapper[4856]: I0126 17:00:19.697231 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:19Z","lastTransitionTime":"2026-01-26T17:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:19 crc kubenswrapper[4856]: I0126 17:00:19.799263 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:19 crc kubenswrapper[4856]: I0126 17:00:19.799319 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:19 crc kubenswrapper[4856]: I0126 17:00:19.799328 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:19 crc kubenswrapper[4856]: I0126 17:00:19.799340 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:19 crc kubenswrapper[4856]: I0126 17:00:19.799349 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:19Z","lastTransitionTime":"2026-01-26T17:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:19 crc kubenswrapper[4856]: I0126 17:00:19.901637 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:19 crc kubenswrapper[4856]: I0126 17:00:19.901666 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:19 crc kubenswrapper[4856]: I0126 17:00:19.901676 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:19 crc kubenswrapper[4856]: I0126 17:00:19.901692 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:19 crc kubenswrapper[4856]: I0126 17:00:19.901702 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:19Z","lastTransitionTime":"2026-01-26T17:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.003840 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.003880 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.003892 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.003909 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.003921 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:20Z","lastTransitionTime":"2026-01-26T17:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.106968 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.107016 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.107027 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.107043 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.107055 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:20Z","lastTransitionTime":"2026-01-26T17:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.209869 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.209933 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.209947 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.209971 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.209988 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:20Z","lastTransitionTime":"2026-01-26T17:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.312466 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.312508 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.312517 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.312556 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.312565 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:20Z","lastTransitionTime":"2026-01-26T17:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.394742 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.394848 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:00:20 crc kubenswrapper[4856]: E0126 17:00:20.394910 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.394978 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.395002 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 17:00:20 crc kubenswrapper[4856]: E0126 17:00:20.395203 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:00:20 crc kubenswrapper[4856]: E0126 17:00:20.395315 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-295wr" podUID="12e50462-28e6-4531-ada4-e652310e6cce" Jan 26 17:00:20 crc kubenswrapper[4856]: E0126 17:00:20.395411 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.416274 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.416339 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.416355 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.416378 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.416393 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:20Z","lastTransitionTime":"2026-01-26T17:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.519913 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.519982 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.519997 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.520018 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.520030 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:20Z","lastTransitionTime":"2026-01-26T17:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.579580 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.579815 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.579898 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.579947 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.579976 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T17:00:20Z","lastTransitionTime":"2026-01-26T17:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.589762 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 10:45:44.864025374 +0000 UTC Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.658955 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-fdsb9"] Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.659716 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fdsb9" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.662516 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.662641 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.663557 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.665290 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.695394 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e7ec14e-9348-48ea-ae7c-5cf3974b7a55-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-fdsb9\" (UID: \"7e7ec14e-9348-48ea-ae7c-5cf3974b7a55\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fdsb9" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.695431 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7e7ec14e-9348-48ea-ae7c-5cf3974b7a55-service-ca\") pod \"cluster-version-operator-5c965bbfc6-fdsb9\" (UID: \"7e7ec14e-9348-48ea-ae7c-5cf3974b7a55\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fdsb9" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.695478 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/7e7ec14e-9348-48ea-ae7c-5cf3974b7a55-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-fdsb9\" (UID: \"7e7ec14e-9348-48ea-ae7c-5cf3974b7a55\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fdsb9" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.695514 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/7e7ec14e-9348-48ea-ae7c-5cf3974b7a55-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-fdsb9\" (UID: \"7e7ec14e-9348-48ea-ae7c-5cf3974b7a55\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fdsb9" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.695574 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/7e7ec14e-9348-48ea-ae7c-5cf3974b7a55-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-fdsb9\" (UID: \"7e7ec14e-9348-48ea-ae7c-5cf3974b7a55\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fdsb9" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.796568 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e7ec14e-9348-48ea-ae7c-5cf3974b7a55-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-fdsb9\" (UID: \"7e7ec14e-9348-48ea-ae7c-5cf3974b7a55\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fdsb9" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.796668 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/7e7ec14e-9348-48ea-ae7c-5cf3974b7a55-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-fdsb9\" (UID: \"7e7ec14e-9348-48ea-ae7c-5cf3974b7a55\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fdsb9" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.796729 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/7e7ec14e-9348-48ea-ae7c-5cf3974b7a55-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-fdsb9\" (UID: \"7e7ec14e-9348-48ea-ae7c-5cf3974b7a55\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fdsb9" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.796820 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e7ec14e-9348-48ea-ae7c-5cf3974b7a55-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-fdsb9\" (UID: \"7e7ec14e-9348-48ea-ae7c-5cf3974b7a55\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fdsb9" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.796859 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7e7ec14e-9348-48ea-ae7c-5cf3974b7a55-service-ca\") pod \"cluster-version-operator-5c965bbfc6-fdsb9\" (UID: \"7e7ec14e-9348-48ea-ae7c-5cf3974b7a55\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fdsb9" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.796904 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/7e7ec14e-9348-48ea-ae7c-5cf3974b7a55-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-fdsb9\" (UID: \"7e7ec14e-9348-48ea-ae7c-5cf3974b7a55\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fdsb9" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.796971 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/7e7ec14e-9348-48ea-ae7c-5cf3974b7a55-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-fdsb9\" (UID: \"7e7ec14e-9348-48ea-ae7c-5cf3974b7a55\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fdsb9" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.798136 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7e7ec14e-9348-48ea-ae7c-5cf3974b7a55-service-ca\") pod \"cluster-version-operator-5c965bbfc6-fdsb9\" (UID: \"7e7ec14e-9348-48ea-ae7c-5cf3974b7a55\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fdsb9" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.804212 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e7ec14e-9348-48ea-ae7c-5cf3974b7a55-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-fdsb9\" (UID: \"7e7ec14e-9348-48ea-ae7c-5cf3974b7a55\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fdsb9" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.828738 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e7ec14e-9348-48ea-ae7c-5cf3974b7a55-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-fdsb9\" (UID: \"7e7ec14e-9348-48ea-ae7c-5cf3974b7a55\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fdsb9" Jan 26 17:00:20 crc kubenswrapper[4856]: I0126 17:00:20.976275 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fdsb9" Jan 26 17:00:21 crc kubenswrapper[4856]: W0126 17:00:21.005387 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e7ec14e_9348_48ea_ae7c_5cf3974b7a55.slice/crio-79b8671c2619e7cea07cffb5d2d7bd4c1ea0c3444ee981556807083a2163c60a WatchSource:0}: Error finding container 79b8671c2619e7cea07cffb5d2d7bd4c1ea0c3444ee981556807083a2163c60a: Status 404 returned error can't find the container with id 79b8671c2619e7cea07cffb5d2d7bd4c1ea0c3444ee981556807083a2163c60a Jan 26 17:00:21 crc kubenswrapper[4856]: I0126 17:00:21.590618 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 18:01:47.884230807 +0000 UTC Jan 26 17:00:21 crc kubenswrapper[4856]: I0126 17:00:21.590916 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 26 17:00:21 crc kubenswrapper[4856]: I0126 17:00:21.605120 4856 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 26 17:00:21 crc kubenswrapper[4856]: I0126 17:00:21.963192 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fdsb9" event={"ID":"7e7ec14e-9348-48ea-ae7c-5cf3974b7a55","Type":"ContainerStarted","Data":"64170a572cc954a40ed5bf4f9d7a0fff4dd4a74e7c0e71b32b75b571217d368a"} Jan 26 17:00:21 crc kubenswrapper[4856]: I0126 17:00:21.963279 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fdsb9" event={"ID":"7e7ec14e-9348-48ea-ae7c-5cf3974b7a55","Type":"ContainerStarted","Data":"79b8671c2619e7cea07cffb5d2d7bd4c1ea0c3444ee981556807083a2163c60a"} Jan 26 17:00:22 crc kubenswrapper[4856]: I0126 17:00:22.394392 4856 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:00:22 crc kubenswrapper[4856]: I0126 17:00:22.394392 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:00:22 crc kubenswrapper[4856]: I0126 17:00:22.394576 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:00:22 crc kubenswrapper[4856]: I0126 17:00:22.394892 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 17:00:22 crc kubenswrapper[4856]: E0126 17:00:22.395089 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:00:22 crc kubenswrapper[4856]: E0126 17:00:22.395203 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:00:22 crc kubenswrapper[4856]: E0126 17:00:22.395289 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:00:22 crc kubenswrapper[4856]: E0126 17:00:22.395452 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-295wr" podUID="12e50462-28e6-4531-ada4-e652310e6cce" Jan 26 17:00:24 crc kubenswrapper[4856]: I0126 17:00:24.394224 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 17:00:24 crc kubenswrapper[4856]: E0126 17:00:24.394349 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-295wr" podUID="12e50462-28e6-4531-ada4-e652310e6cce" Jan 26 17:00:24 crc kubenswrapper[4856]: I0126 17:00:24.394687 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:00:24 crc kubenswrapper[4856]: I0126 17:00:24.394718 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:00:24 crc kubenswrapper[4856]: E0126 17:00:24.394766 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:00:24 crc kubenswrapper[4856]: I0126 17:00:24.394791 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:00:24 crc kubenswrapper[4856]: E0126 17:00:24.394961 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:00:24 crc kubenswrapper[4856]: I0126 17:00:24.395107 4856 scope.go:117] "RemoveContainer" containerID="203d756903498bed1c57a3e87a95f4b24f808514567a99505ba2f3cfec468cb6" Jan 26 17:00:24 crc kubenswrapper[4856]: E0126 17:00:24.395249 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pxh94_openshift-ovn-kubernetes(ab5b6f50-172b-4535-a0f9-5d103bcab4e7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" Jan 26 17:00:24 crc kubenswrapper[4856]: E0126 17:00:24.395300 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:00:25 crc kubenswrapper[4856]: E0126 17:00:25.322644 4856 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 26 17:00:25 crc kubenswrapper[4856]: E0126 17:00:25.732538 4856 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 17:00:26 crc kubenswrapper[4856]: I0126 17:00:26.394913 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 17:00:26 crc kubenswrapper[4856]: I0126 17:00:26.394952 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:00:26 crc kubenswrapper[4856]: I0126 17:00:26.394999 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:00:26 crc kubenswrapper[4856]: E0126 17:00:26.395262 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-295wr" podUID="12e50462-28e6-4531-ada4-e652310e6cce" Jan 26 17:00:26 crc kubenswrapper[4856]: E0126 17:00:26.395370 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:00:26 crc kubenswrapper[4856]: E0126 17:00:26.395412 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:00:26 crc kubenswrapper[4856]: I0126 17:00:26.395576 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:00:26 crc kubenswrapper[4856]: E0126 17:00:26.395642 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:00:28 crc kubenswrapper[4856]: I0126 17:00:28.395174 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 17:00:28 crc kubenswrapper[4856]: I0126 17:00:28.395248 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:00:28 crc kubenswrapper[4856]: I0126 17:00:28.395337 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:00:28 crc kubenswrapper[4856]: I0126 17:00:28.395393 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:00:28 crc kubenswrapper[4856]: E0126 17:00:28.395962 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-295wr" podUID="12e50462-28e6-4531-ada4-e652310e6cce" Jan 26 17:00:28 crc kubenswrapper[4856]: E0126 17:00:28.396405 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:00:28 crc kubenswrapper[4856]: E0126 17:00:28.396272 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:00:28 crc kubenswrapper[4856]: E0126 17:00:28.396617 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:00:30 crc kubenswrapper[4856]: I0126 17:00:30.394791 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:00:30 crc kubenswrapper[4856]: I0126 17:00:30.394824 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 17:00:30 crc kubenswrapper[4856]: I0126 17:00:30.394824 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:00:30 crc kubenswrapper[4856]: I0126 17:00:30.394865 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:00:30 crc kubenswrapper[4856]: E0126 17:00:30.395807 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-295wr" podUID="12e50462-28e6-4531-ada4-e652310e6cce" Jan 26 17:00:30 crc kubenswrapper[4856]: E0126 17:00:30.395955 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:00:30 crc kubenswrapper[4856]: E0126 17:00:30.396205 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:00:30 crc kubenswrapper[4856]: E0126 17:00:30.396244 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:00:30 crc kubenswrapper[4856]: E0126 17:00:30.733490 4856 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 17:00:32 crc kubenswrapper[4856]: I0126 17:00:32.395068 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:00:32 crc kubenswrapper[4856]: I0126 17:00:32.395116 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:00:32 crc kubenswrapper[4856]: I0126 17:00:32.395155 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 17:00:32 crc kubenswrapper[4856]: E0126 17:00:32.395241 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:00:32 crc kubenswrapper[4856]: I0126 17:00:32.395276 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:00:32 crc kubenswrapper[4856]: E0126 17:00:32.395503 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-295wr" podUID="12e50462-28e6-4531-ada4-e652310e6cce" Jan 26 17:00:32 crc kubenswrapper[4856]: E0126 17:00:32.395538 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:00:32 crc kubenswrapper[4856]: E0126 17:00:32.395623 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:00:33 crc kubenswrapper[4856]: I0126 17:00:33.003342 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rq622_7a742e7b-c420-46e3-9e96-e9c744af6124/kube-multus/1.log" Jan 26 17:00:33 crc kubenswrapper[4856]: I0126 17:00:33.004282 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rq622_7a742e7b-c420-46e3-9e96-e9c744af6124/kube-multus/0.log" Jan 26 17:00:33 crc kubenswrapper[4856]: I0126 17:00:33.004356 4856 generic.go:334] "Generic (PLEG): container finished" podID="7a742e7b-c420-46e3-9e96-e9c744af6124" containerID="afeb20035224feeab28a92ac77b43a24e653e49c56a25590a9861019a2b7a8ff" exitCode=1 Jan 26 17:00:33 crc kubenswrapper[4856]: I0126 17:00:33.004398 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rq622" event={"ID":"7a742e7b-c420-46e3-9e96-e9c744af6124","Type":"ContainerDied","Data":"afeb20035224feeab28a92ac77b43a24e653e49c56a25590a9861019a2b7a8ff"} Jan 26 17:00:33 crc kubenswrapper[4856]: I0126 17:00:33.004452 4856 scope.go:117] "RemoveContainer" containerID="ad7222c9b91c0065a545bc1904d9864a5923cc13bfb6617daeb4a965a830f191" Jan 26 17:00:33 crc kubenswrapper[4856]: I0126 17:00:33.004882 4856 scope.go:117] "RemoveContainer" containerID="afeb20035224feeab28a92ac77b43a24e653e49c56a25590a9861019a2b7a8ff" Jan 26 17:00:33 crc kubenswrapper[4856]: E0126 17:00:33.005055 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-rq622_openshift-multus(7a742e7b-c420-46e3-9e96-e9c744af6124)\"" pod="openshift-multus/multus-rq622" podUID="7a742e7b-c420-46e3-9e96-e9c744af6124" Jan 26 17:00:33 crc kubenswrapper[4856]: I0126 17:00:33.026639 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fdsb9" podStartSLOduration=99.026617703 podStartE2EDuration="1m39.026617703s" podCreationTimestamp="2026-01-26 16:58:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:00:21.989087162 +0000 UTC m=+117.942341163" watchObservedRunningTime="2026-01-26 17:00:33.026617703 +0000 UTC m=+128.979871684" Jan 26 17:00:34 crc kubenswrapper[4856]: I0126 17:00:34.009026 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rq622_7a742e7b-c420-46e3-9e96-e9c744af6124/kube-multus/1.log" Jan 26 17:00:34 crc kubenswrapper[4856]: I0126 17:00:34.395274 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:00:34 crc kubenswrapper[4856]: I0126 17:00:34.395376 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 17:00:34 crc kubenswrapper[4856]: I0126 17:00:34.395386 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:00:34 crc kubenswrapper[4856]: I0126 17:00:34.395298 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:00:34 crc kubenswrapper[4856]: E0126 17:00:34.395472 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:00:34 crc kubenswrapper[4856]: E0126 17:00:34.395620 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:00:34 crc kubenswrapper[4856]: E0126 17:00:34.395729 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-295wr" podUID="12e50462-28e6-4531-ada4-e652310e6cce" Jan 26 17:00:34 crc kubenswrapper[4856]: E0126 17:00:34.395861 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:00:35 crc kubenswrapper[4856]: E0126 17:00:35.734853 4856 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 17:00:36 crc kubenswrapper[4856]: I0126 17:00:36.395177 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 17:00:36 crc kubenswrapper[4856]: I0126 17:00:36.395225 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:00:36 crc kubenswrapper[4856]: I0126 17:00:36.395235 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:00:36 crc kubenswrapper[4856]: E0126 17:00:36.395328 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-295wr" podUID="12e50462-28e6-4531-ada4-e652310e6cce" Jan 26 17:00:36 crc kubenswrapper[4856]: I0126 17:00:36.395351 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:00:36 crc kubenswrapper[4856]: E0126 17:00:36.395446 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:00:36 crc kubenswrapper[4856]: E0126 17:00:36.395630 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:00:36 crc kubenswrapper[4856]: E0126 17:00:36.395702 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:00:38 crc kubenswrapper[4856]: I0126 17:00:38.394708 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:00:38 crc kubenswrapper[4856]: I0126 17:00:38.394755 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 17:00:38 crc kubenswrapper[4856]: I0126 17:00:38.394997 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:00:38 crc kubenswrapper[4856]: I0126 17:00:38.395002 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:00:38 crc kubenswrapper[4856]: E0126 17:00:38.395138 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:00:38 crc kubenswrapper[4856]: E0126 17:00:38.395280 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-295wr" podUID="12e50462-28e6-4531-ada4-e652310e6cce" Jan 26 17:00:38 crc kubenswrapper[4856]: E0126 17:00:38.395434 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:00:38 crc kubenswrapper[4856]: E0126 17:00:38.395547 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:00:39 crc kubenswrapper[4856]: I0126 17:00:39.395942 4856 scope.go:117] "RemoveContainer" containerID="203d756903498bed1c57a3e87a95f4b24f808514567a99505ba2f3cfec468cb6" Jan 26 17:00:40 crc kubenswrapper[4856]: I0126 17:00:40.028761 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pxh94_ab5b6f50-172b-4535-a0f9-5d103bcab4e7/ovnkube-controller/3.log" Jan 26 17:00:40 crc kubenswrapper[4856]: I0126 17:00:40.031560 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" event={"ID":"ab5b6f50-172b-4535-a0f9-5d103bcab4e7","Type":"ContainerStarted","Data":"984517fa62d3e73cf4b33b7c4a101f8221c940f66938120373b7d41cac9c5e5c"} Jan 26 17:00:40 crc kubenswrapper[4856]: I0126 17:00:40.032077 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 17:00:40 crc kubenswrapper[4856]: I0126 17:00:40.164107 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" podStartSLOduration=106.164081989 podStartE2EDuration="1m46.164081989s" podCreationTimestamp="2026-01-26 16:58:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:00:40.065119811 +0000 UTC m=+136.018373802" watchObservedRunningTime="2026-01-26 17:00:40.164081989 +0000 UTC m=+136.117335970" Jan 26 17:00:40 crc kubenswrapper[4856]: I0126 17:00:40.165496 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-295wr"] Jan 26 17:00:40 crc kubenswrapper[4856]: I0126 17:00:40.165615 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 17:00:40 crc kubenswrapper[4856]: E0126 17:00:40.165714 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-295wr" podUID="12e50462-28e6-4531-ada4-e652310e6cce" Jan 26 17:00:40 crc kubenswrapper[4856]: I0126 17:00:40.394491 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:00:40 crc kubenswrapper[4856]: I0126 17:00:40.394556 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:00:40 crc kubenswrapper[4856]: E0126 17:00:40.394664 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:00:40 crc kubenswrapper[4856]: I0126 17:00:40.394467 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:00:40 crc kubenswrapper[4856]: E0126 17:00:40.394798 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:00:40 crc kubenswrapper[4856]: E0126 17:00:40.395261 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:00:40 crc kubenswrapper[4856]: E0126 17:00:40.736734 4856 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 17:00:42 crc kubenswrapper[4856]: I0126 17:00:42.394731 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:00:42 crc kubenswrapper[4856]: I0126 17:00:42.394739 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 17:00:42 crc kubenswrapper[4856]: E0126 17:00:42.394895 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:00:42 crc kubenswrapper[4856]: I0126 17:00:42.394758 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:00:42 crc kubenswrapper[4856]: I0126 17:00:42.394746 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:00:42 crc kubenswrapper[4856]: E0126 17:00:42.394985 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:00:42 crc kubenswrapper[4856]: E0126 17:00:42.395099 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-295wr" podUID="12e50462-28e6-4531-ada4-e652310e6cce" Jan 26 17:00:42 crc kubenswrapper[4856]: E0126 17:00:42.395222 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:00:44 crc kubenswrapper[4856]: I0126 17:00:44.394196 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:00:44 crc kubenswrapper[4856]: I0126 17:00:44.394265 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:00:44 crc kubenswrapper[4856]: I0126 17:00:44.394204 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:00:44 crc kubenswrapper[4856]: E0126 17:00:44.394363 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:00:44 crc kubenswrapper[4856]: I0126 17:00:44.394283 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 17:00:44 crc kubenswrapper[4856]: E0126 17:00:44.394462 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:00:44 crc kubenswrapper[4856]: E0126 17:00:44.394602 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:00:44 crc kubenswrapper[4856]: E0126 17:00:44.394692 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-295wr" podUID="12e50462-28e6-4531-ada4-e652310e6cce" Jan 26 17:00:45 crc kubenswrapper[4856]: I0126 17:00:45.395801 4856 scope.go:117] "RemoveContainer" containerID="afeb20035224feeab28a92ac77b43a24e653e49c56a25590a9861019a2b7a8ff" Jan 26 17:00:45 crc kubenswrapper[4856]: E0126 17:00:45.737235 4856 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 17:00:46 crc kubenswrapper[4856]: I0126 17:00:46.053396 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rq622_7a742e7b-c420-46e3-9e96-e9c744af6124/kube-multus/1.log" Jan 26 17:00:46 crc kubenswrapper[4856]: I0126 17:00:46.053613 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rq622" event={"ID":"7a742e7b-c420-46e3-9e96-e9c744af6124","Type":"ContainerStarted","Data":"ddec0dbea657c6160cfdfd78886d5ae335dab8b667b0e0e3813dffa86a2ae2dc"} Jan 26 17:00:46 crc kubenswrapper[4856]: I0126 17:00:46.394810 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 17:00:46 crc kubenswrapper[4856]: I0126 17:00:46.394810 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:00:46 crc kubenswrapper[4856]: I0126 17:00:46.394818 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:00:46 crc kubenswrapper[4856]: I0126 17:00:46.394830 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:00:46 crc kubenswrapper[4856]: E0126 17:00:46.395209 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-295wr" podUID="12e50462-28e6-4531-ada4-e652310e6cce" Jan 26 17:00:46 crc kubenswrapper[4856]: E0126 17:00:46.395363 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:00:46 crc kubenswrapper[4856]: E0126 17:00:46.395566 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:00:46 crc kubenswrapper[4856]: E0126 17:00:46.395636 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:00:48 crc kubenswrapper[4856]: I0126 17:00:48.395118 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:00:48 crc kubenswrapper[4856]: I0126 17:00:48.395126 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:00:48 crc kubenswrapper[4856]: I0126 17:00:48.395183 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:00:48 crc kubenswrapper[4856]: E0126 17:00:48.396867 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:00:48 crc kubenswrapper[4856]: E0126 17:00:48.396991 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:00:48 crc kubenswrapper[4856]: I0126 17:00:48.395265 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 17:00:48 crc kubenswrapper[4856]: E0126 17:00:48.397245 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-295wr" podUID="12e50462-28e6-4531-ada4-e652310e6cce" Jan 26 17:00:48 crc kubenswrapper[4856]: E0126 17:00:48.397451 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:00:50 crc kubenswrapper[4856]: I0126 17:00:50.395394 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:00:50 crc kubenswrapper[4856]: I0126 17:00:50.395465 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 17:00:50 crc kubenswrapper[4856]: E0126 17:00:50.395596 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 17:00:50 crc kubenswrapper[4856]: I0126 17:00:50.395683 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:00:50 crc kubenswrapper[4856]: E0126 17:00:50.395718 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-295wr" podUID="12e50462-28e6-4531-ada4-e652310e6cce" Jan 26 17:00:50 crc kubenswrapper[4856]: E0126 17:00:50.395753 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 17:00:50 crc kubenswrapper[4856]: I0126 17:00:50.396004 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:00:50 crc kubenswrapper[4856]: E0126 17:00:50.396171 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.233908 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.273535 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-6rlxp"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.274257 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-6rlxp" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.274257 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lndnt"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.275011 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lndnt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.278670 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6cghs"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.279258 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cghs" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.281235 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.282511 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fpqvc"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.283000 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fpqvc" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.286786 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.287337 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.287387 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.287490 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.287580 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.287816 
4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.287923 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.287980 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.288106 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.288278 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.288334 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.288396 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.288399 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.288442 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.288419 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.288341 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.288565 4856 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.288579 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.288596 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.288709 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.289021 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.289351 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.290359 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.290384 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.291517 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.291910 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4w5bf"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.292110 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 17:00:51 crc 
kubenswrapper[4856]: I0126 17:00:51.292591 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4w5bf" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.292909 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-7l927"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.293438 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-7l927" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.295482 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.295936 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.295955 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5bjl7"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.296473 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-cb8nk"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.296774 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5bjl7" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.298845 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.301024 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-6qgnn"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.301335 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7p5jt"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.301591 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.301724 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7p5jt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.301813 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-6qgnn" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.301629 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-962cr"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.302291 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-88lkr"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.302677 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-4pbj2"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.303032 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-4pbj2" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.303175 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-88lkr" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.303467 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-962cr" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.308809 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-jdjcq"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.309617 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-jdjcq" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.334071 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-7xb2b"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.336802 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-7xb2b" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.338175 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.338440 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.338667 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.339084 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.339698 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.339937 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.340183 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.340393 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.340798 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-27vjc"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.343443 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 26 17:00:51 crc kubenswrapper[4856]: 
I0126 17:00:51.343971 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.381408 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-27vjc" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.381930 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.381975 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.383035 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.384154 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.384417 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.384541 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.384708 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.384893 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.385846 4856 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"audit" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.386796 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.386854 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ba3cf6a-a6be-4108-a155-c8bb530aa037-serving-cert\") pod \"openshift-config-operator-7777fb866f-5bjl7\" (UID: \"2ba3cf6a-a6be-4108-a155-c8bb530aa037\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5bjl7" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.386915 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.386916 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/2ba3cf6a-a6be-4108-a155-c8bb530aa037-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5bjl7\" (UID: \"2ba3cf6a-a6be-4108-a155-c8bb530aa037\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5bjl7" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.386976 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tmd5\" (UniqueName: \"kubernetes.io/projected/2ba3cf6a-a6be-4108-a155-c8bb530aa037-kube-api-access-6tmd5\") pod \"openshift-config-operator-7777fb866f-5bjl7\" (UID: \"2ba3cf6a-a6be-4108-a155-c8bb530aa037\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5bjl7" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.387096 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 
17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.387185 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.387206 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.387296 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.387404 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.387441 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.387492 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.387581 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.387226 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.390143 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.392517 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.392587 4856 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.392724 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.392873 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.393005 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.393244 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.393860 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.394046 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.394181 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.394687 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.394851 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.395071 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 26 
17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.395152 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.392534 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.395470 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.395637 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.398315 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.398520 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.398914 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.399100 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.399143 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.399276 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.399315 4856 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.400017 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.401564 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.406617 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.413414 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.414470 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.414660 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.414875 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.415175 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.415323 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.416204 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.416612 4856 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qdmxz"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.417136 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-58fcz"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.417313 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.417509 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.417545 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-q7j7b"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.417682 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.417835 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.417872 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-q7j7b" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.418099 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qdmxz" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.418264 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-58fcz" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.437074 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.437937 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.439191 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.439792 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.440562 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.444734 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.447356 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-h9b2g"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.447709 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.447890 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.448346 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-h9b2g" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.449829 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6snv6"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.450230 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6snv6" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.456403 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.459349 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cl895"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.460042 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cl895" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.461277 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-2sfhr"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.461987 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2sfhr" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.462128 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-zzxln"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.463902 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-ddghz"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.464380 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-ddghz" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.464420 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zzxln" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.465694 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k662z"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.466487 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k662z" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.466502 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nn46h"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.467185 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nn46h" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.468450 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8m4l6"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.469276 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8m4l6" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.470562 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mr7cp"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.470916 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mr7cp" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.472281 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lndnt"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.473233 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fpqvc"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.475507 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-gz7kg"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.475832 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l9nqd"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.476221 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l9nqd" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.476741 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-gz7kg" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.479609 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wvttb"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.480363 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wvttb" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.480915 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490780-8q6q4"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.481684 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-8q6q4" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.483168 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rrhjv"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.483768 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rrhjv" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.484671 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-cjzsq"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.485338 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cjzsq" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.486437 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vmwvg"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.486943 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vmwvg" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.488017 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wxbdh"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.488149 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg98g\" (UniqueName: \"kubernetes.io/projected/a6d331bd-2db3-4319-9f5c-db56d408d9e3-kube-api-access-lg98g\") pod \"apiserver-76f77b778f-6rlxp\" (UID: \"a6d331bd-2db3-4319-9f5c-db56d408d9e3\") " pod="openshift-apiserver/apiserver-76f77b778f-6rlxp" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.488196 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/81c2f96b-55e0-483b-b72c-df7e156e9218-audit-policies\") pod \"apiserver-7bbb656c7d-6cghs\" (UID: \"81c2f96b-55e0-483b-b72c-df7e156e9218\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cghs" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.488446 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.488515 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7e5d16a-d45d-4b40-93bf-bfaa6be2d1c0-service-ca-bundle\") pod \"authentication-operator-69f744f599-jdjcq\" (UID: \"a7e5d16a-d45d-4b40-93bf-bfaa6be2d1c0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jdjcq" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.488633 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54a246a2-f674-4735-b295-b56699ece95b-config\") pod \"machine-approver-56656f9798-962cr\" (UID: \"54a246a2-f674-4735-b295-b56699ece95b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-962cr" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.488755 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr8gn\" (UniqueName: \"kubernetes.io/projected/1afc0f4c-e02d-4a70-aaba-e761e8c04eee-kube-api-access-mr8gn\") pod \"controller-manager-879f6c89f-lndnt\" (UID: \"1afc0f4c-e02d-4a70-aaba-e761e8c04eee\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lndnt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.489354 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42c0e428-821f-45a1-85a7-54ebdb81ef1c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-cl895\" (UID: \"42c0e428-821f-45a1-85a7-54ebdb81ef1c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cl895" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.489386 4856 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa11789e-7a2a-4dbf-85ca-c20a9d64a1f4-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6snv6\" (UID: \"fa11789e-7a2a-4dbf-85ca-c20a9d64a1f4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6snv6" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.489401 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf1f11c8-17b8-49b7-b12d-92891f478222-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-7p5jt\" (UID: \"bf1f11c8-17b8-49b7-b12d-92891f478222\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7p5jt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.489417 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpfwk\" (UniqueName: \"kubernetes.io/projected/5fe6baed-ab97-4d8a-8be2-6f00f9698136-kube-api-access-hpfwk\") pod \"route-controller-manager-6576b87f9c-fpqvc\" (UID: \"5fe6baed-ab97-4d8a-8be2-6f00f9698136\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fpqvc" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.489437 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5c244eff-aada-44f3-b250-96878a3400c4-etcd-client\") pod \"etcd-operator-b45778765-27vjc\" (UID: \"5c244eff-aada-44f3-b250-96878a3400c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-27vjc" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.489454 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf1f11c8-17b8-49b7-b12d-92891f478222-bound-sa-token\") 
pod \"cluster-image-registry-operator-dc59b4c8b-7p5jt\" (UID: \"bf1f11c8-17b8-49b7-b12d-92891f478222\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7p5jt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.489472 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-cb8nk\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.489489 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b28404ed-2e71-4b3f-9140-35ee89dbc8f2-console-config\") pod \"console-f9d7485db-6qgnn\" (UID: \"b28404ed-2e71-4b3f-9140-35ee89dbc8f2\") " pod="openshift-console/console-f9d7485db-6qgnn" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.489514 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6d331bd-2db3-4319-9f5c-db56d408d9e3-serving-cert\") pod \"apiserver-76f77b778f-6rlxp\" (UID: \"a6d331bd-2db3-4319-9f5c-db56d408d9e3\") " pod="openshift-apiserver/apiserver-76f77b778f-6rlxp" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.489551 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-cb8nk\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.489566 4856 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-cb8nk\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.489655 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7e5d16a-d45d-4b40-93bf-bfaa6be2d1c0-serving-cert\") pod \"authentication-operator-69f744f599-jdjcq\" (UID: \"a7e5d16a-d45d-4b40-93bf-bfaa6be2d1c0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jdjcq" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.489693 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fe6baed-ab97-4d8a-8be2-6f00f9698136-config\") pod \"route-controller-manager-6576b87f9c-fpqvc\" (UID: \"5fe6baed-ab97-4d8a-8be2-6f00f9698136\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fpqvc" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.489723 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-cb8nk\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.489821 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/b28404ed-2e71-4b3f-9140-35ee89dbc8f2-oauth-serving-cert\") pod \"console-f9d7485db-6qgnn\" (UID: \"b28404ed-2e71-4b3f-9140-35ee89dbc8f2\") " pod="openshift-console/console-f9d7485db-6qgnn" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.490070 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42c0e428-821f-45a1-85a7-54ebdb81ef1c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-cl895\" (UID: \"42c0e428-821f-45a1-85a7-54ebdb81ef1c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cl895" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.490098 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqhmb\" (UniqueName: \"kubernetes.io/projected/81c2f96b-55e0-483b-b72c-df7e156e9218-kube-api-access-rqhmb\") pod \"apiserver-7bbb656c7d-6cghs\" (UID: \"81c2f96b-55e0-483b-b72c-df7e156e9218\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cghs" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.490119 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/149e3000-35d7-47bd-83f0-00ab5e0736c2-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-q7j7b\" (UID: \"149e3000-35d7-47bd-83f0-00ab5e0736c2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-q7j7b" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.490235 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa11789e-7a2a-4dbf-85ca-c20a9d64a1f4-config\") pod \"kube-controller-manager-operator-78b949d7b-6snv6\" (UID: \"fa11789e-7a2a-4dbf-85ca-c20a9d64a1f4\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6snv6" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.490282 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/2ba3cf6a-a6be-4108-a155-c8bb530aa037-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5bjl7\" (UID: \"2ba3cf6a-a6be-4108-a155-c8bb530aa037\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5bjl7" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.490308 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a6d331bd-2db3-4319-9f5c-db56d408d9e3-image-import-ca\") pod \"apiserver-76f77b778f-6rlxp\" (UID: \"a6d331bd-2db3-4319-9f5c-db56d408d9e3\") " pod="openshift-apiserver/apiserver-76f77b778f-6rlxp" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.490343 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/81c2f96b-55e0-483b-b72c-df7e156e9218-audit-dir\") pod \"apiserver-7bbb656c7d-6cghs\" (UID: \"81c2f96b-55e0-483b-b72c-df7e156e9218\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cghs" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.490373 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7e5d16a-d45d-4b40-93bf-bfaa6be2d1c0-config\") pod \"authentication-operator-69f744f599-jdjcq\" (UID: \"a7e5d16a-d45d-4b40-93bf-bfaa6be2d1c0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jdjcq" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.490397 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/831dc87e-8e14-43d3-a36e-dc7679041ae5-config\") pod \"console-operator-58897d9998-4pbj2\" (UID: \"831dc87e-8e14-43d3-a36e-dc7679041ae5\") " pod="openshift-console-operator/console-operator-58897d9998-4pbj2" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.490430 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b28404ed-2e71-4b3f-9140-35ee89dbc8f2-trusted-ca-bundle\") pod \"console-f9d7485db-6qgnn\" (UID: \"b28404ed-2e71-4b3f-9140-35ee89dbc8f2\") " pod="openshift-console/console-f9d7485db-6qgnn" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.490468 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkl7p\" (UniqueName: \"kubernetes.io/projected/359660cd-b412-4640-bedf-993e976e7b3c-kube-api-access-rkl7p\") pod \"openshift-apiserver-operator-796bbdcf4f-88lkr\" (UID: \"359660cd-b412-4640-bedf-993e976e7b3c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-88lkr" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.490499 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qcw5\" (UniqueName: \"kubernetes.io/projected/bf1f11c8-17b8-49b7-b12d-92891f478222-kube-api-access-9qcw5\") pod \"cluster-image-registry-operator-dc59b4c8b-7p5jt\" (UID: \"bf1f11c8-17b8-49b7-b12d-92891f478222\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7p5jt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.490541 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2wtm\" (UniqueName: \"kubernetes.io/projected/0c1af7db-aa80-4cb0-a9cb-5afdf677f28c-kube-api-access-v2wtm\") pod \"cluster-samples-operator-665b6dd947-4w5bf\" (UID: \"0c1af7db-aa80-4cb0-a9cb-5afdf677f28c\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4w5bf" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.490568 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkngl\" (UniqueName: \"kubernetes.io/projected/5c244eff-aada-44f3-b250-96878a3400c4-kube-api-access-nkngl\") pod \"etcd-operator-b45778765-27vjc\" (UID: \"5c244eff-aada-44f3-b250-96878a3400c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-27vjc" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.490593 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/69008ed1-f3e5-400d-852f-adbcd94199f6-audit-dir\") pod \"oauth-openshift-558db77b4-cb8nk\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.490612 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1afc0f4c-e02d-4a70-aaba-e761e8c04eee-client-ca\") pod \"controller-manager-879f6c89f-lndnt\" (UID: \"1afc0f4c-e02d-4a70-aaba-e761e8c04eee\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lndnt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.490646 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f9b8f57e-00b9-4355-ace2-0319d320d208-webhook-cert\") pod \"packageserver-d55dfcdfc-mr7cp\" (UID: \"f9b8f57e-00b9-4355-ace2-0319d320d208\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mr7cp" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.490654 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/2ba3cf6a-a6be-4108-a155-c8bb530aa037-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5bjl7\" (UID: \"2ba3cf6a-a6be-4108-a155-c8bb530aa037\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5bjl7" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.490674 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vq4z\" (UniqueName: \"kubernetes.io/projected/a7e5d16a-d45d-4b40-93bf-bfaa6be2d1c0-kube-api-access-7vq4z\") pod \"authentication-operator-69f744f599-jdjcq\" (UID: \"a7e5d16a-d45d-4b40-93bf-bfaa6be2d1c0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jdjcq" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.490707 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0c1af7db-aa80-4cb0-a9cb-5afdf677f28c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-4w5bf\" (UID: \"0c1af7db-aa80-4cb0-a9cb-5afdf677f28c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4w5bf" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.490730 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42c0e428-821f-45a1-85a7-54ebdb81ef1c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-cl895\" (UID: \"42c0e428-821f-45a1-85a7-54ebdb81ef1c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cl895" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.490763 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgb6k\" (UniqueName: \"kubernetes.io/projected/94291fa4-24a5-499e-8143-89c8784d9284-kube-api-access-hgb6k\") pod 
\"downloads-7954f5f757-7l927\" (UID: \"94291fa4-24a5-499e-8143-89c8784d9284\") " pod="openshift-console/downloads-7954f5f757-7l927" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.490785 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ddc2e6b7-5582-4579-bf2c-ed165b74c91a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-vmwvg\" (UID: \"ddc2e6b7-5582-4579-bf2c-ed165b74c91a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vmwvg" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.490818 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81c2f96b-55e0-483b-b72c-df7e156e9218-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6cghs\" (UID: \"81c2f96b-55e0-483b-b72c-df7e156e9218\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cghs" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.490834 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjhr5\" (UniqueName: \"kubernetes.io/projected/beb6f283-75cb-4184-b985-4e6c095feca1-kube-api-access-mjhr5\") pod \"multus-admission-controller-857f4d67dd-ddghz\" (UID: \"beb6f283-75cb-4184-b985-4e6c095feca1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ddghz" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.490850 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05d74105-0ecd-41ac-9001-8b21b0fd6ba4-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-l9nqd\" (UID: \"05d74105-0ecd-41ac-9001-8b21b0fd6ba4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l9nqd" Jan 26 17:00:51 crc kubenswrapper[4856]: 
I0126 17:00:51.490865 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/85f05bd5-ff83-4d29-9531-ab3499088095-metrics-certs\") pod \"router-default-5444994796-h9b2g\" (UID: \"85f05bd5-ff83-4d29-9531-ab3499088095\") " pod="openshift-ingress/router-default-5444994796-h9b2g" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.490882 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7lk4\" (UniqueName: \"kubernetes.io/projected/85f05bd5-ff83-4d29-9531-ab3499088095-kube-api-access-x7lk4\") pod \"router-default-5444994796-h9b2g\" (UID: \"85f05bd5-ff83-4d29-9531-ab3499088095\") " pod="openshift-ingress/router-default-5444994796-h9b2g" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.490897 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/149e3000-35d7-47bd-83f0-00ab5e0736c2-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-q7j7b\" (UID: \"149e3000-35d7-47bd-83f0-00ab5e0736c2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-q7j7b" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.490913 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6d331bd-2db3-4319-9f5c-db56d408d9e3-config\") pod \"apiserver-76f77b778f-6rlxp\" (UID: \"a6d331bd-2db3-4319-9f5c-db56d408d9e3\") " pod="openshift-apiserver/apiserver-76f77b778f-6rlxp" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.490928 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ddc2e6b7-5582-4579-bf2c-ed165b74c91a-serving-cert\") pod 
\"kube-apiserver-operator-766d6c64bb-vmwvg\" (UID: \"ddc2e6b7-5582-4579-bf2c-ed165b74c91a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vmwvg" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.490946 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tmd5\" (UniqueName: \"kubernetes.io/projected/2ba3cf6a-a6be-4108-a155-c8bb530aa037-kube-api-access-6tmd5\") pod \"openshift-config-operator-7777fb866f-5bjl7\" (UID: \"2ba3cf6a-a6be-4108-a155-c8bb530aa037\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5bjl7" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.490961 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a6d331bd-2db3-4319-9f5c-db56d408d9e3-etcd-client\") pod \"apiserver-76f77b778f-6rlxp\" (UID: \"a6d331bd-2db3-4319-9f5c-db56d408d9e3\") " pod="openshift-apiserver/apiserver-76f77b778f-6rlxp" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.490979 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/54a246a2-f674-4735-b295-b56699ece95b-machine-approver-tls\") pod \"machine-approver-56656f9798-962cr\" (UID: \"54a246a2-f674-4735-b295-b56699ece95b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-962cr" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.491001 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-cb8nk\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.491017 4856 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/54a246a2-f674-4735-b295-b56699ece95b-auth-proxy-config\") pod \"machine-approver-56656f9798-962cr\" (UID: \"54a246a2-f674-4735-b295-b56699ece95b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-962cr" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.491053 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6fxv\" (UniqueName: \"kubernetes.io/projected/f9b8f57e-00b9-4355-ace2-0319d320d208-kube-api-access-d6fxv\") pod \"packageserver-d55dfcdfc-mr7cp\" (UID: \"f9b8f57e-00b9-4355-ace2-0319d320d208\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mr7cp" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.491069 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/831dc87e-8e14-43d3-a36e-dc7679041ae5-trusted-ca\") pod \"console-operator-58897d9998-4pbj2\" (UID: \"831dc87e-8e14-43d3-a36e-dc7679041ae5\") " pod="openshift-console-operator/console-operator-58897d9998-4pbj2" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.491082 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f9b8f57e-00b9-4355-ace2-0319d320d208-tmpfs\") pod \"packageserver-d55dfcdfc-mr7cp\" (UID: \"f9b8f57e-00b9-4355-ace2-0319d320d208\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mr7cp" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.491102 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/033cb12f-278f-431a-8104-519db9a3152f-signing-key\") pod \"service-ca-9c57cc56f-gz7kg\" (UID: 
\"033cb12f-278f-431a-8104-519db9a3152f\") " pod="openshift-service-ca/service-ca-9c57cc56f-gz7kg" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.491117 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5c244eff-aada-44f3-b250-96878a3400c4-etcd-service-ca\") pod \"etcd-operator-b45778765-27vjc\" (UID: \"5c244eff-aada-44f3-b250-96878a3400c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-27vjc" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.491155 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7e5d16a-d45d-4b40-93bf-bfaa6be2d1c0-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-jdjcq\" (UID: \"a7e5d16a-d45d-4b40-93bf-bfaa6be2d1c0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jdjcq" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.491169 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2274g\" (UniqueName: \"kubernetes.io/projected/54a246a2-f674-4735-b295-b56699ece95b-kube-api-access-2274g\") pod \"machine-approver-56656f9798-962cr\" (UID: \"54a246a2-f674-4735-b295-b56699ece95b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-962cr" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.491364 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/85f05bd5-ff83-4d29-9531-ab3499088095-stats-auth\") pod \"router-default-5444994796-h9b2g\" (UID: \"85f05bd5-ff83-4d29-9531-ab3499088095\") " pod="openshift-ingress/router-default-5444994796-h9b2g" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.491380 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-d2wnd\" (UniqueName: \"kubernetes.io/projected/831dc87e-8e14-43d3-a36e-dc7679041ae5-kube-api-access-d2wnd\") pod \"console-operator-58897d9998-4pbj2\" (UID: \"831dc87e-8e14-43d3-a36e-dc7679041ae5\") " pod="openshift-console-operator/console-operator-58897d9998-4pbj2" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.491395 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77a97acb-2908-48fb-8bcd-0647f3e90160-config\") pod \"machine-api-operator-5694c8668f-7xb2b\" (UID: \"77a97acb-2908-48fb-8bcd-0647f3e90160\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7xb2b" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.491409 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c475g\" (UniqueName: \"kubernetes.io/projected/77a97acb-2908-48fb-8bcd-0647f3e90160-kube-api-access-c475g\") pod \"machine-api-operator-5694c8668f-7xb2b\" (UID: \"77a97acb-2908-48fb-8bcd-0647f3e90160\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7xb2b" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.491425 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-cb8nk\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.491438 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b28404ed-2e71-4b3f-9140-35ee89dbc8f2-console-oauth-config\") pod \"console-f9d7485db-6qgnn\" (UID: 
\"b28404ed-2e71-4b3f-9140-35ee89dbc8f2\") " pod="openshift-console/console-f9d7485db-6qgnn" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.491452 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b28404ed-2e71-4b3f-9140-35ee89dbc8f2-service-ca\") pod \"console-f9d7485db-6qgnn\" (UID: \"b28404ed-2e71-4b3f-9140-35ee89dbc8f2\") " pod="openshift-console/console-f9d7485db-6qgnn" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.491464 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f9b8f57e-00b9-4355-ace2-0319d320d208-apiservice-cert\") pod \"packageserver-d55dfcdfc-mr7cp\" (UID: \"f9b8f57e-00b9-4355-ace2-0319d320d208\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mr7cp" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.491478 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/beb6f283-75cb-4184-b985-4e6c095feca1-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-ddghz\" (UID: \"beb6f283-75cb-4184-b985-4e6c095feca1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ddghz" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.491491 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/69008ed1-f3e5-400d-852f-adbcd94199f6-audit-policies\") pod \"oauth-openshift-558db77b4-cb8nk\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.491505 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" 
(UniqueName: \"kubernetes.io/configmap/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-cb8nk\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.491518 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b28404ed-2e71-4b3f-9140-35ee89dbc8f2-console-serving-cert\") pod \"console-f9d7485db-6qgnn\" (UID: \"b28404ed-2e71-4b3f-9140-35ee89dbc8f2\") " pod="openshift-console/console-f9d7485db-6qgnn" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.491568 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/77a97acb-2908-48fb-8bcd-0647f3e90160-images\") pod \"machine-api-operator-5694c8668f-7xb2b\" (UID: \"77a97acb-2908-48fb-8bcd-0647f3e90160\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7xb2b" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.491585 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/77a97acb-2908-48fb-8bcd-0647f3e90160-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-7xb2b\" (UID: \"77a97acb-2908-48fb-8bcd-0647f3e90160\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7xb2b" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.491600 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-cb8nk\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.491616 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5c244eff-aada-44f3-b250-96878a3400c4-etcd-ca\") pod \"etcd-operator-b45778765-27vjc\" (UID: \"5c244eff-aada-44f3-b250-96878a3400c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-27vjc" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.491630 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/359660cd-b412-4640-bedf-993e976e7b3c-config\") pod \"openshift-apiserver-operator-796bbdcf4f-88lkr\" (UID: \"359660cd-b412-4640-bedf-993e976e7b3c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-88lkr" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.491646 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddc2e6b7-5582-4579-bf2c-ed165b74c91a-config\") pod \"kube-apiserver-operator-766d6c64bb-vmwvg\" (UID: \"ddc2e6b7-5582-4579-bf2c-ed165b74c91a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vmwvg" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.491662 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ba3cf6a-a6be-4108-a155-c8bb530aa037-serving-cert\") pod \"openshift-config-operator-7777fb866f-5bjl7\" (UID: \"2ba3cf6a-a6be-4108-a155-c8bb530aa037\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5bjl7" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.491677 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5c244eff-aada-44f3-b250-96878a3400c4-serving-cert\") pod \"etcd-operator-b45778765-27vjc\" (UID: \"5c244eff-aada-44f3-b250-96878a3400c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-27vjc" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.491693 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81c2f96b-55e0-483b-b72c-df7e156e9218-serving-cert\") pod \"apiserver-7bbb656c7d-6cghs\" (UID: \"81c2f96b-55e0-483b-b72c-df7e156e9218\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cghs" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.491711 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-cb8nk\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.491733 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c244eff-aada-44f3-b250-96878a3400c4-config\") pod \"etcd-operator-b45778765-27vjc\" (UID: \"5c244eff-aada-44f3-b250-96878a3400c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-27vjc" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.491753 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/81c2f96b-55e0-483b-b72c-df7e156e9218-encryption-config\") pod \"apiserver-7bbb656c7d-6cghs\" (UID: \"81c2f96b-55e0-483b-b72c-df7e156e9218\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cghs" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.491773 4856 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-cb8nk\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.491794 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa11789e-7a2a-4dbf-85ca-c20a9d64a1f4-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6snv6\" (UID: \"fa11789e-7a2a-4dbf-85ca-c20a9d64a1f4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6snv6" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.491815 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a6d331bd-2db3-4319-9f5c-db56d408d9e3-encryption-config\") pod \"apiserver-76f77b778f-6rlxp\" (UID: \"a6d331bd-2db3-4319-9f5c-db56d408d9e3\") " pod="openshift-apiserver/apiserver-76f77b778f-6rlxp" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.491834 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf1f11c8-17b8-49b7-b12d-92891f478222-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-7p5jt\" (UID: \"bf1f11c8-17b8-49b7-b12d-92891f478222\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7p5jt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.491866 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf2b2\" (UniqueName: 
\"kubernetes.io/projected/69008ed1-f3e5-400d-852f-adbcd94199f6-kube-api-access-kf2b2\") pod \"oauth-openshift-558db77b4-cb8nk\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.491900 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1afc0f4c-e02d-4a70-aaba-e761e8c04eee-serving-cert\") pod \"controller-manager-879f6c89f-lndnt\" (UID: \"1afc0f4c-e02d-4a70-aaba-e761e8c04eee\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lndnt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.491922 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a6d331bd-2db3-4319-9f5c-db56d408d9e3-node-pullsecrets\") pod \"apiserver-76f77b778f-6rlxp\" (UID: \"a6d331bd-2db3-4319-9f5c-db56d408d9e3\") " pod="openshift-apiserver/apiserver-76f77b778f-6rlxp" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.491941 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a6d331bd-2db3-4319-9f5c-db56d408d9e3-audit\") pod \"apiserver-76f77b778f-6rlxp\" (UID: \"a6d331bd-2db3-4319-9f5c-db56d408d9e3\") " pod="openshift-apiserver/apiserver-76f77b778f-6rlxp" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.491961 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/81c2f96b-55e0-483b-b72c-df7e156e9218-etcd-client\") pod \"apiserver-7bbb656c7d-6cghs\" (UID: \"81c2f96b-55e0-483b-b72c-df7e156e9218\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cghs" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.491982 4856 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/831dc87e-8e14-43d3-a36e-dc7679041ae5-serving-cert\") pod \"console-operator-58897d9998-4pbj2\" (UID: \"831dc87e-8e14-43d3-a36e-dc7679041ae5\") " pod="openshift-console-operator/console-operator-58897d9998-4pbj2" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.492003 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/85f05bd5-ff83-4d29-9531-ab3499088095-default-certificate\") pod \"router-default-5444994796-h9b2g\" (UID: \"85f05bd5-ff83-4d29-9531-ab3499088095\") " pod="openshift-ingress/router-default-5444994796-h9b2g" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.492026 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a6d331bd-2db3-4319-9f5c-db56d408d9e3-etcd-serving-ca\") pod \"apiserver-76f77b778f-6rlxp\" (UID: \"a6d331bd-2db3-4319-9f5c-db56d408d9e3\") " pod="openshift-apiserver/apiserver-76f77b778f-6rlxp" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.492046 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fe6baed-ab97-4d8a-8be2-6f00f9698136-serving-cert\") pod \"route-controller-manager-6576b87f9c-fpqvc\" (UID: \"5fe6baed-ab97-4d8a-8be2-6f00f9698136\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fpqvc" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.492068 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-cb8nk\" (UID: 
\"69008ed1-f3e5-400d-852f-adbcd94199f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.492092 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5fe6baed-ab97-4d8a-8be2-6f00f9698136-client-ca\") pod \"route-controller-manager-6576b87f9c-fpqvc\" (UID: \"5fe6baed-ab97-4d8a-8be2-6f00f9698136\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fpqvc" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.492112 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1afc0f4c-e02d-4a70-aaba-e761e8c04eee-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-lndnt\" (UID: \"1afc0f4c-e02d-4a70-aaba-e761e8c04eee\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lndnt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.492134 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvkhn\" (UniqueName: \"kubernetes.io/projected/b28404ed-2e71-4b3f-9140-35ee89dbc8f2-kube-api-access-dvkhn\") pod \"console-f9d7485db-6qgnn\" (UID: \"b28404ed-2e71-4b3f-9140-35ee89dbc8f2\") " pod="openshift-console/console-f9d7485db-6qgnn" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.492155 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/033cb12f-278f-431a-8104-519db9a3152f-signing-cabundle\") pod \"service-ca-9c57cc56f-gz7kg\" (UID: \"033cb12f-278f-431a-8104-519db9a3152f\") " pod="openshift-service-ca/service-ca-9c57cc56f-gz7kg" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.492175 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6d331bd-2db3-4319-9f5c-db56d408d9e3-trusted-ca-bundle\") pod \"apiserver-76f77b778f-6rlxp\" (UID: \"a6d331bd-2db3-4319-9f5c-db56d408d9e3\") " pod="openshift-apiserver/apiserver-76f77b778f-6rlxp" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.492196 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5scg\" (UniqueName: \"kubernetes.io/projected/033cb12f-278f-431a-8104-519db9a3152f-kube-api-access-s5scg\") pod \"service-ca-9c57cc56f-gz7kg\" (UID: \"033cb12f-278f-431a-8104-519db9a3152f\") " pod="openshift-service-ca/service-ca-9c57cc56f-gz7kg" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.492214 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdh5l\" (UniqueName: \"kubernetes.io/projected/149e3000-35d7-47bd-83f0-00ab5e0736c2-kube-api-access-mdh5l\") pod \"kube-storage-version-migrator-operator-b67b599dd-q7j7b\" (UID: \"149e3000-35d7-47bd-83f0-00ab5e0736c2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-q7j7b" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.492232 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85f05bd5-ff83-4d29-9531-ab3499088095-service-ca-bundle\") pod \"router-default-5444994796-h9b2g\" (UID: \"85f05bd5-ff83-4d29-9531-ab3499088095\") " pod="openshift-ingress/router-default-5444994796-h9b2g" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.492251 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05d74105-0ecd-41ac-9001-8b21b0fd6ba4-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-l9nqd\" (UID: 
\"05d74105-0ecd-41ac-9001-8b21b0fd6ba4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l9nqd" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.492272 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a6d331bd-2db3-4319-9f5c-db56d408d9e3-audit-dir\") pod \"apiserver-76f77b778f-6rlxp\" (UID: \"a6d331bd-2db3-4319-9f5c-db56d408d9e3\") " pod="openshift-apiserver/apiserver-76f77b778f-6rlxp" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.492291 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/81c2f96b-55e0-483b-b72c-df7e156e9218-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6cghs\" (UID: \"81c2f96b-55e0-483b-b72c-df7e156e9218\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cghs" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.492312 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/359660cd-b412-4640-bedf-993e976e7b3c-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-88lkr\" (UID: \"359660cd-b412-4640-bedf-993e976e7b3c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-88lkr" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.492331 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1afc0f4c-e02d-4a70-aaba-e761e8c04eee-config\") pod \"controller-manager-879f6c89f-lndnt\" (UID: \"1afc0f4c-e02d-4a70-aaba-e761e8c04eee\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lndnt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.494960 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-console/downloads-7954f5f757-7l927"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.496953 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.497120 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-z7cgq"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.497902 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-z7cgq" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.499997 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-fbsj7"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.500407 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-fbsj7" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.501050 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ba3cf6a-a6be-4108-a155-c8bb530aa037-serving-cert\") pod \"openshift-config-operator-7777fb866f-5bjl7\" (UID: \"2ba3cf6a-a6be-4108-a155-c8bb530aa037\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5bjl7" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.502055 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-cb8nk"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.503833 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-6rlxp"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.505611 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7p5jt"] Jan 26 17:00:51 crc 
kubenswrapper[4856]: I0126 17:00:51.508574 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-58fcz"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.509783 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4w5bf"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.511820 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-zzxln"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.514340 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cl895"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.515956 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-4pbj2"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.519450 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.519869 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-2sfhr"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.521904 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5bjl7"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.522808 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mr7cp"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.524060 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-27vjc"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.525403 
4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-7xb2b"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.526786 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-jdjcq"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.534207 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k662z"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.536496 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-ddghz"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.537383 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.538024 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qdmxz"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.539666 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-6qgnn"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.541667 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6cghs"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.543466 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nn46h"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.544777 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vmwvg"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.546517 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-image-registry/image-registry-697d97f7c8-wxbdh"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.548399 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8m4l6"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.550491 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6snv6"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.551425 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-88lkr"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.553235 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-gz7kg"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.554615 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l9nqd"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.556805 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.556872 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-q7j7b"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.557677 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rrhjv"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.559074 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-vfm8t"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.560431 4856 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-vfm8t" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.560769 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-c9qdp"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.561294 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-c9qdp" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.561956 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-fbsj7"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.563301 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-cjzsq"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.564760 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-z7cgq"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.566190 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wvttb"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.567598 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490780-8q6q4"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.569111 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-vfm8t"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.570422 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-dgcqn"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.571167 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-dgcqn" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.571566 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-dgcqn"] Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.575619 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.593558 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdh5l\" (UniqueName: \"kubernetes.io/projected/149e3000-35d7-47bd-83f0-00ab5e0736c2-kube-api-access-mdh5l\") pod \"kube-storage-version-migrator-operator-b67b599dd-q7j7b\" (UID: \"149e3000-35d7-47bd-83f0-00ab5e0736c2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-q7j7b" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.593605 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1afc0f4c-e02d-4a70-aaba-e761e8c04eee-config\") pod \"controller-manager-879f6c89f-lndnt\" (UID: \"1afc0f4c-e02d-4a70-aaba-e761e8c04eee\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lndnt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.593630 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85f05bd5-ff83-4d29-9531-ab3499088095-service-ca-bundle\") pod \"router-default-5444994796-h9b2g\" (UID: \"85f05bd5-ff83-4d29-9531-ab3499088095\") " pod="openshift-ingress/router-default-5444994796-h9b2g" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.593652 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a6d331bd-2db3-4319-9f5c-db56d408d9e3-audit-dir\") pod 
\"apiserver-76f77b778f-6rlxp\" (UID: \"a6d331bd-2db3-4319-9f5c-db56d408d9e3\") " pod="openshift-apiserver/apiserver-76f77b778f-6rlxp" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.593676 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/81c2f96b-55e0-483b-b72c-df7e156e9218-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6cghs\" (UID: \"81c2f96b-55e0-483b-b72c-df7e156e9218\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cghs" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.593696 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/81c2f96b-55e0-483b-b72c-df7e156e9218-audit-policies\") pod \"apiserver-7bbb656c7d-6cghs\" (UID: \"81c2f96b-55e0-483b-b72c-df7e156e9218\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cghs" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.593712 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7e5d16a-d45d-4b40-93bf-bfaa6be2d1c0-service-ca-bundle\") pod \"authentication-operator-69f744f599-jdjcq\" (UID: \"a7e5d16a-d45d-4b40-93bf-bfaa6be2d1c0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jdjcq" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.593813 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a6d331bd-2db3-4319-9f5c-db56d408d9e3-audit-dir\") pod \"apiserver-76f77b778f-6rlxp\" (UID: \"a6d331bd-2db3-4319-9f5c-db56d408d9e3\") " pod="openshift-apiserver/apiserver-76f77b778f-6rlxp" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.594594 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/81c2f96b-55e0-483b-b72c-df7e156e9218-audit-policies\") pod \"apiserver-7bbb656c7d-6cghs\" (UID: \"81c2f96b-55e0-483b-b72c-df7e156e9218\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cghs" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.594663 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54a246a2-f674-4735-b295-b56699ece95b-config\") pod \"machine-approver-56656f9798-962cr\" (UID: \"54a246a2-f674-4735-b295-b56699ece95b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-962cr" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.594684 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr8gn\" (UniqueName: \"kubernetes.io/projected/1afc0f4c-e02d-4a70-aaba-e761e8c04eee-kube-api-access-mr8gn\") pod \"controller-manager-879f6c89f-lndnt\" (UID: \"1afc0f4c-e02d-4a70-aaba-e761e8c04eee\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lndnt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.594693 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/81c2f96b-55e0-483b-b72c-df7e156e9218-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6cghs\" (UID: \"81c2f96b-55e0-483b-b72c-df7e156e9218\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cghs" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.594775 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7e5d16a-d45d-4b40-93bf-bfaa6be2d1c0-service-ca-bundle\") pod \"authentication-operator-69f744f599-jdjcq\" (UID: \"a7e5d16a-d45d-4b40-93bf-bfaa6be2d1c0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jdjcq" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.595090 4856 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54a246a2-f674-4735-b295-b56699ece95b-config\") pod \"machine-approver-56656f9798-962cr\" (UID: \"54a246a2-f674-4735-b295-b56699ece95b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-962cr" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.594700 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42c0e428-821f-45a1-85a7-54ebdb81ef1c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-cl895\" (UID: \"42c0e428-821f-45a1-85a7-54ebdb81ef1c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cl895" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.595189 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lg98g\" (UniqueName: \"kubernetes.io/projected/a6d331bd-2db3-4319-9f5c-db56d408d9e3-kube-api-access-lg98g\") pod \"apiserver-76f77b778f-6rlxp\" (UID: \"a6d331bd-2db3-4319-9f5c-db56d408d9e3\") " pod="openshift-apiserver/apiserver-76f77b778f-6rlxp" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.595206 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa11789e-7a2a-4dbf-85ca-c20a9d64a1f4-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6snv6\" (UID: \"fa11789e-7a2a-4dbf-85ca-c20a9d64a1f4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6snv6" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.595225 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf1f11c8-17b8-49b7-b12d-92891f478222-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-7p5jt\" (UID: \"bf1f11c8-17b8-49b7-b12d-92891f478222\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7p5jt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.595324 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1afc0f4c-e02d-4a70-aaba-e761e8c04eee-config\") pod \"controller-manager-879f6c89f-lndnt\" (UID: \"1afc0f4c-e02d-4a70-aaba-e761e8c04eee\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lndnt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.595379 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpfwk\" (UniqueName: \"kubernetes.io/projected/5fe6baed-ab97-4d8a-8be2-6f00f9698136-kube-api-access-hpfwk\") pod \"route-controller-manager-6576b87f9c-fpqvc\" (UID: \"5fe6baed-ab97-4d8a-8be2-6f00f9698136\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fpqvc" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.595399 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf1f11c8-17b8-49b7-b12d-92891f478222-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-7p5jt\" (UID: \"bf1f11c8-17b8-49b7-b12d-92891f478222\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7p5jt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.595522 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.595632 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6d331bd-2db3-4319-9f5c-db56d408d9e3-serving-cert\") pod \"apiserver-76f77b778f-6rlxp\" (UID: \"a6d331bd-2db3-4319-9f5c-db56d408d9e3\") " pod="openshift-apiserver/apiserver-76f77b778f-6rlxp" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 
17:00:51.595659 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-cb8nk\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.595682 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/abbeffe1-cfd5-4476-9a8e-2ab5b4869444-profile-collector-cert\") pod \"catalog-operator-68c6474976-nn46h\" (UID: \"abbeffe1-cfd5-4476-9a8e-2ab5b4869444\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nn46h" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.595699 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7e5d16a-d45d-4b40-93bf-bfaa6be2d1c0-serving-cert\") pod \"authentication-operator-69f744f599-jdjcq\" (UID: \"a7e5d16a-d45d-4b40-93bf-bfaa6be2d1c0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jdjcq" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.595715 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fe6baed-ab97-4d8a-8be2-6f00f9698136-config\") pod \"route-controller-manager-6576b87f9c-fpqvc\" (UID: \"5fe6baed-ab97-4d8a-8be2-6f00f9698136\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fpqvc" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.595740 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b28404ed-2e71-4b3f-9140-35ee89dbc8f2-oauth-serving-cert\") pod 
\"console-f9d7485db-6qgnn\" (UID: \"b28404ed-2e71-4b3f-9140-35ee89dbc8f2\") " pod="openshift-console/console-f9d7485db-6qgnn" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.595760 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wxzv\" (UniqueName: \"kubernetes.io/projected/17a72e73-4d54-4a29-a85a-ecb1aff30d10-kube-api-access-9wxzv\") pod \"olm-operator-6b444d44fb-k662z\" (UID: \"17a72e73-4d54-4a29-a85a-ecb1aff30d10\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k662z" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.595779 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa11789e-7a2a-4dbf-85ca-c20a9d64a1f4-config\") pod \"kube-controller-manager-operator-78b949d7b-6snv6\" (UID: \"fa11789e-7a2a-4dbf-85ca-c20a9d64a1f4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6snv6" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.595796 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/831dc87e-8e14-43d3-a36e-dc7679041ae5-config\") pod \"console-operator-58897d9998-4pbj2\" (UID: \"831dc87e-8e14-43d3-a36e-dc7679041ae5\") " pod="openshift-console-operator/console-operator-58897d9998-4pbj2" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.595810 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b28404ed-2e71-4b3f-9140-35ee89dbc8f2-trusted-ca-bundle\") pod \"console-f9d7485db-6qgnn\" (UID: \"b28404ed-2e71-4b3f-9140-35ee89dbc8f2\") " pod="openshift-console/console-f9d7485db-6qgnn" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.596084 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a7e5d16a-d45d-4b40-93bf-bfaa6be2d1c0-config\") pod \"authentication-operator-69f744f599-jdjcq\" (UID: \"a7e5d16a-d45d-4b40-93bf-bfaa6be2d1c0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jdjcq" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.596114 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1afc0f4c-e02d-4a70-aaba-e761e8c04eee-client-ca\") pod \"controller-manager-879f6c89f-lndnt\" (UID: \"1afc0f4c-e02d-4a70-aaba-e761e8c04eee\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lndnt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.596132 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/437b5573-a342-4383-ba60-be0e3ccba839-node-bootstrap-token\") pod \"machine-config-server-c9qdp\" (UID: \"437b5573-a342-4383-ba60-be0e3ccba839\") " pod="openshift-machine-config-operator/machine-config-server-c9qdp" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.596151 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkngl\" (UniqueName: \"kubernetes.io/projected/5c244eff-aada-44f3-b250-96878a3400c4-kube-api-access-nkngl\") pod \"etcd-operator-b45778765-27vjc\" (UID: \"5c244eff-aada-44f3-b250-96878a3400c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-27vjc" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.596176 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkl7p\" (UniqueName: \"kubernetes.io/projected/359660cd-b412-4640-bedf-993e976e7b3c-kube-api-access-rkl7p\") pod \"openshift-apiserver-operator-796bbdcf4f-88lkr\" (UID: \"359660cd-b412-4640-bedf-993e976e7b3c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-88lkr" Jan 26 17:00:51 crc 
kubenswrapper[4856]: I0126 17:00:51.596205 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6clr\" (UniqueName: \"kubernetes.io/projected/37a77f41-5dbf-4842-9e77-83dc22b50f4a-kube-api-access-w6clr\") pod \"migrator-59844c95c7-qdmxz\" (UID: \"37a77f41-5dbf-4842-9e77-83dc22b50f4a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qdmxz" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.596230 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e9ee376-b7c7-4b6a-91b3-cc86a3a02dc7-config-volume\") pod \"collect-profiles-29490780-8q6q4\" (UID: \"7e9ee376-b7c7-4b6a-91b3-cc86a3a02dc7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-8q6q4" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.596253 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vq4z\" (UniqueName: \"kubernetes.io/projected/a7e5d16a-d45d-4b40-93bf-bfaa6be2d1c0-kube-api-access-7vq4z\") pod \"authentication-operator-69f744f599-jdjcq\" (UID: \"a7e5d16a-d45d-4b40-93bf-bfaa6be2d1c0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jdjcq" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.596271 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42c0e428-821f-45a1-85a7-54ebdb81ef1c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-cl895\" (UID: \"42c0e428-821f-45a1-85a7-54ebdb81ef1c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cl895" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.596293 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8qsf\" (UniqueName: 
\"kubernetes.io/projected/2d37efbf-d18f-486b-9b43-bc4d181af4ca-kube-api-access-b8qsf\") pod \"marketplace-operator-79b997595-wvttb\" (UID: \"2d37efbf-d18f-486b-9b43-bc4d181af4ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-wvttb" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.596317 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc4dg\" (UniqueName: \"kubernetes.io/projected/abbeffe1-cfd5-4476-9a8e-2ab5b4869444-kube-api-access-hc4dg\") pod \"catalog-operator-68c6474976-nn46h\" (UID: \"abbeffe1-cfd5-4476-9a8e-2ab5b4869444\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nn46h" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.597031 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7e5d16a-d45d-4b40-93bf-bfaa6be2d1c0-config\") pod \"authentication-operator-69f744f599-jdjcq\" (UID: \"a7e5d16a-d45d-4b40-93bf-bfaa6be2d1c0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jdjcq" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.597154 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf1f11c8-17b8-49b7-b12d-92891f478222-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-7p5jt\" (UID: \"bf1f11c8-17b8-49b7-b12d-92891f478222\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7p5jt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.597412 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/831dc87e-8e14-43d3-a36e-dc7679041ae5-config\") pod \"console-operator-58897d9998-4pbj2\" (UID: \"831dc87e-8e14-43d3-a36e-dc7679041ae5\") " pod="openshift-console-operator/console-operator-58897d9998-4pbj2" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 
17:00:51.597477 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b28404ed-2e71-4b3f-9140-35ee89dbc8f2-oauth-serving-cert\") pod \"console-f9d7485db-6qgnn\" (UID: \"b28404ed-2e71-4b3f-9140-35ee89dbc8f2\") " pod="openshift-console/console-f9d7485db-6qgnn" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.596515 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6d331bd-2db3-4319-9f5c-db56d408d9e3-config\") pod \"apiserver-76f77b778f-6rlxp\" (UID: \"a6d331bd-2db3-4319-9f5c-db56d408d9e3\") " pod="openshift-apiserver/apiserver-76f77b778f-6rlxp" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.597660 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b28404ed-2e71-4b3f-9140-35ee89dbc8f2-trusted-ca-bundle\") pod \"console-f9d7485db-6qgnn\" (UID: \"b28404ed-2e71-4b3f-9140-35ee89dbc8f2\") " pod="openshift-console/console-f9d7485db-6qgnn" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.597693 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgzl5\" (UniqueName: \"kubernetes.io/projected/c8657575-cd22-4ebc-ae9d-4174366985d3-kube-api-access-fgzl5\") pod \"csi-hostpathplugin-vfm8t\" (UID: \"c8657575-cd22-4ebc-ae9d-4174366985d3\") " pod="hostpath-provisioner/csi-hostpathplugin-vfm8t" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.597758 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ddc2e6b7-5582-4579-bf2c-ed165b74c91a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-vmwvg\" (UID: \"ddc2e6b7-5582-4579-bf2c-ed165b74c91a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vmwvg" Jan 26 17:00:51 crc kubenswrapper[4856]: 
I0126 17:00:51.597788 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1afc0f4c-e02d-4a70-aaba-e761e8c04eee-client-ca\") pod \"controller-manager-879f6c89f-lndnt\" (UID: \"1afc0f4c-e02d-4a70-aaba-e761e8c04eee\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lndnt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.597883 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c8657575-cd22-4ebc-ae9d-4174366985d3-csi-data-dir\") pod \"csi-hostpathplugin-vfm8t\" (UID: \"c8657575-cd22-4ebc-ae9d-4174366985d3\") " pod="hostpath-provisioner/csi-hostpathplugin-vfm8t" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.597910 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a4d83db5-776f-4e95-a6fa-b194344f9819-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-2sfhr\" (UID: \"a4d83db5-776f-4e95-a6fa-b194344f9819\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2sfhr" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.597939 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/54a246a2-f674-4735-b295-b56699ece95b-auth-proxy-config\") pod \"machine-approver-56656f9798-962cr\" (UID: \"54a246a2-f674-4735-b295-b56699ece95b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-962cr" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.597957 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6fxv\" (UniqueName: \"kubernetes.io/projected/f9b8f57e-00b9-4355-ace2-0319d320d208-kube-api-access-d6fxv\") pod \"packageserver-d55dfcdfc-mr7cp\" (UID: 
\"f9b8f57e-00b9-4355-ace2-0319d320d208\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mr7cp" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.597974 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f9b8f57e-00b9-4355-ace2-0319d320d208-tmpfs\") pod \"packageserver-d55dfcdfc-mr7cp\" (UID: \"f9b8f57e-00b9-4355-ace2-0319d320d208\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mr7cp" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.597989 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5c244eff-aada-44f3-b250-96878a3400c4-etcd-service-ca\") pod \"etcd-operator-b45778765-27vjc\" (UID: \"5c244eff-aada-44f3-b250-96878a3400c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-27vjc" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.598006 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7e5d16a-d45d-4b40-93bf-bfaa6be2d1c0-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-jdjcq\" (UID: \"a7e5d16a-d45d-4b40-93bf-bfaa6be2d1c0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jdjcq" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.598024 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77a97acb-2908-48fb-8bcd-0647f3e90160-config\") pod \"machine-api-operator-5694c8668f-7xb2b\" (UID: \"77a97acb-2908-48fb-8bcd-0647f3e90160\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7xb2b" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.598039 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2274g\" (UniqueName: 
\"kubernetes.io/projected/54a246a2-f674-4735-b295-b56699ece95b-kube-api-access-2274g\") pod \"machine-approver-56656f9798-962cr\" (UID: \"54a246a2-f674-4735-b295-b56699ece95b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-962cr" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.598057 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4z5z\" (UniqueName: \"kubernetes.io/projected/05d74105-0ecd-41ac-9001-8b21b0fd6ba4-kube-api-access-m4z5z\") pod \"openshift-controller-manager-operator-756b6f6bc6-l9nqd\" (UID: \"05d74105-0ecd-41ac-9001-8b21b0fd6ba4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l9nqd" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.598076 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b28404ed-2e71-4b3f-9140-35ee89dbc8f2-console-oauth-config\") pod \"console-f9d7485db-6qgnn\" (UID: \"b28404ed-2e71-4b3f-9140-35ee89dbc8f2\") " pod="openshift-console/console-f9d7485db-6qgnn" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.598094 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-cb8nk\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.598110 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b28404ed-2e71-4b3f-9140-35ee89dbc8f2-service-ca\") pod \"console-f9d7485db-6qgnn\" (UID: \"b28404ed-2e71-4b3f-9140-35ee89dbc8f2\") " pod="openshift-console/console-f9d7485db-6qgnn" Jan 26 17:00:51 crc 
kubenswrapper[4856]: I0126 17:00:51.598131 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/beb6f283-75cb-4184-b985-4e6c095feca1-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-ddghz\" (UID: \"beb6f283-75cb-4184-b985-4e6c095feca1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ddghz" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.598156 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2d37efbf-d18f-486b-9b43-bc4d181af4ca-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wvttb\" (UID: \"2d37efbf-d18f-486b-9b43-bc4d181af4ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-wvttb" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.598178 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/550752e4-a1d9-46f4-9118-9e9919b2fe6b-serving-cert\") pod \"service-ca-operator-777779d784-cjzsq\" (UID: \"550752e4-a1d9-46f4-9118-9e9919b2fe6b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cjzsq" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.598196 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac10f013-cd1f-47e0-8f1c-5ff4e6e75784-config-volume\") pod \"dns-default-dgcqn\" (UID: \"ac10f013-cd1f-47e0-8f1c-5ff4e6e75784\") " pod="openshift-dns/dns-default-dgcqn" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.598226 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/69008ed1-f3e5-400d-852f-adbcd94199f6-audit-policies\") pod \"oauth-openshift-558db77b4-cb8nk\" (UID: 
\"69008ed1-f3e5-400d-852f-adbcd94199f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.598241 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-cb8nk\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.598255 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddc2e6b7-5582-4579-bf2c-ed165b74c91a-config\") pod \"kube-apiserver-operator-766d6c64bb-vmwvg\" (UID: \"ddc2e6b7-5582-4579-bf2c-ed165b74c91a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vmwvg" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.598269 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/77a97acb-2908-48fb-8bcd-0647f3e90160-images\") pod \"machine-api-operator-5694c8668f-7xb2b\" (UID: \"77a97acb-2908-48fb-8bcd-0647f3e90160\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7xb2b" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.598300 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/77a97acb-2908-48fb-8bcd-0647f3e90160-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-7xb2b\" (UID: \"77a97acb-2908-48fb-8bcd-0647f3e90160\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7xb2b" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.598317 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/81c2f96b-55e0-483b-b72c-df7e156e9218-serving-cert\") pod \"apiserver-7bbb656c7d-6cghs\" (UID: \"81c2f96b-55e0-483b-b72c-df7e156e9218\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cghs" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.598332 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-cb8nk\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.598350 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c8657575-cd22-4ebc-ae9d-4174366985d3-mountpoint-dir\") pod \"csi-hostpathplugin-vfm8t\" (UID: \"c8657575-cd22-4ebc-ae9d-4174366985d3\") " pod="hostpath-provisioner/csi-hostpathplugin-vfm8t" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.598366 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c8657575-cd22-4ebc-ae9d-4174366985d3-plugins-dir\") pod \"csi-hostpathplugin-vfm8t\" (UID: \"c8657575-cd22-4ebc-ae9d-4174366985d3\") " pod="hostpath-provisioner/csi-hostpathplugin-vfm8t" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.598385 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c244eff-aada-44f3-b250-96878a3400c4-config\") pod \"etcd-operator-b45778765-27vjc\" (UID: \"5c244eff-aada-44f3-b250-96878a3400c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-27vjc" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.598476 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6d331bd-2db3-4319-9f5c-db56d408d9e3-config\") pod \"apiserver-76f77b778f-6rlxp\" (UID: \"a6d331bd-2db3-4319-9f5c-db56d408d9e3\") " pod="openshift-apiserver/apiserver-76f77b778f-6rlxp" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.599241 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/54a246a2-f674-4735-b295-b56699ece95b-auth-proxy-config\") pod \"machine-approver-56656f9798-962cr\" (UID: \"54a246a2-f674-4735-b295-b56699ece95b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-962cr" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.599268 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa11789e-7a2a-4dbf-85ca-c20a9d64a1f4-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6snv6\" (UID: \"fa11789e-7a2a-4dbf-85ca-c20a9d64a1f4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6snv6" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.599332 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf2b2\" (UniqueName: \"kubernetes.io/projected/69008ed1-f3e5-400d-852f-adbcd94199f6-kube-api-access-kf2b2\") pod \"oauth-openshift-558db77b4-cb8nk\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.599357 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a6d331bd-2db3-4319-9f5c-db56d408d9e3-node-pullsecrets\") pod \"apiserver-76f77b778f-6rlxp\" (UID: \"a6d331bd-2db3-4319-9f5c-db56d408d9e3\") " pod="openshift-apiserver/apiserver-76f77b778f-6rlxp" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 
17:00:51.599377 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1afc0f4c-e02d-4a70-aaba-e761e8c04eee-serving-cert\") pod \"controller-manager-879f6c89f-lndnt\" (UID: \"1afc0f4c-e02d-4a70-aaba-e761e8c04eee\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lndnt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.599396 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a6d331bd-2db3-4319-9f5c-db56d408d9e3-etcd-serving-ca\") pod \"apiserver-76f77b778f-6rlxp\" (UID: \"a6d331bd-2db3-4319-9f5c-db56d408d9e3\") " pod="openshift-apiserver/apiserver-76f77b778f-6rlxp" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.599416 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2hpl\" (UniqueName: \"kubernetes.io/projected/a4d83db5-776f-4e95-a6fa-b194344f9819-kube-api-access-t2hpl\") pod \"machine-config-controller-84d6567774-2sfhr\" (UID: \"a4d83db5-776f-4e95-a6fa-b194344f9819\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2sfhr" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.599436 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-cb8nk\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.599459 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/129a0b30-7132-4e3c-ab84-208cae7cb2f2-proxy-tls\") pod 
\"machine-config-operator-74547568cd-zzxln\" (UID: \"129a0b30-7132-4e3c-ab84-208cae7cb2f2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zzxln" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.599521 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1afc0f4c-e02d-4a70-aaba-e761e8c04eee-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-lndnt\" (UID: \"1afc0f4c-e02d-4a70-aaba-e761e8c04eee\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lndnt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.599573 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7e5d16a-d45d-4b40-93bf-bfaa6be2d1c0-serving-cert\") pod \"authentication-operator-69f744f599-jdjcq\" (UID: \"a7e5d16a-d45d-4b40-93bf-bfaa6be2d1c0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jdjcq" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.599578 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvkhn\" (UniqueName: \"kubernetes.io/projected/b28404ed-2e71-4b3f-9140-35ee89dbc8f2-kube-api-access-dvkhn\") pod \"console-f9d7485db-6qgnn\" (UID: \"b28404ed-2e71-4b3f-9140-35ee89dbc8f2\") " pod="openshift-console/console-f9d7485db-6qgnn" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.599635 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/77a97acb-2908-48fb-8bcd-0647f3e90160-images\") pod \"machine-api-operator-5694c8668f-7xb2b\" (UID: \"77a97acb-2908-48fb-8bcd-0647f3e90160\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7xb2b" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.599661 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/a6d331bd-2db3-4319-9f5c-db56d408d9e3-node-pullsecrets\") pod \"apiserver-76f77b778f-6rlxp\" (UID: \"a6d331bd-2db3-4319-9f5c-db56d408d9e3\") " pod="openshift-apiserver/apiserver-76f77b778f-6rlxp" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.600109 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-cb8nk\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.600369 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5c244eff-aada-44f3-b250-96878a3400c4-etcd-service-ca\") pod \"etcd-operator-b45778765-27vjc\" (UID: \"5c244eff-aada-44f3-b250-96878a3400c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-27vjc" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.600550 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c244eff-aada-44f3-b250-96878a3400c4-config\") pod \"etcd-operator-b45778765-27vjc\" (UID: \"5c244eff-aada-44f3-b250-96878a3400c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-27vjc" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.600634 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b28404ed-2e71-4b3f-9140-35ee89dbc8f2-service-ca\") pod \"console-f9d7485db-6qgnn\" (UID: \"b28404ed-2e71-4b3f-9140-35ee89dbc8f2\") " pod="openshift-console/console-f9d7485db-6qgnn" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.600806 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/77a97acb-2908-48fb-8bcd-0647f3e90160-config\") pod \"machine-api-operator-5694c8668f-7xb2b\" (UID: \"77a97acb-2908-48fb-8bcd-0647f3e90160\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7xb2b" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.600924 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7e5d16a-d45d-4b40-93bf-bfaa6be2d1c0-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-jdjcq\" (UID: \"a7e5d16a-d45d-4b40-93bf-bfaa6be2d1c0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jdjcq" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.601134 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6d331bd-2db3-4319-9f5c-db56d408d9e3-serving-cert\") pod \"apiserver-76f77b778f-6rlxp\" (UID: \"a6d331bd-2db3-4319-9f5c-db56d408d9e3\") " pod="openshift-apiserver/apiserver-76f77b778f-6rlxp" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.601268 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a6d331bd-2db3-4319-9f5c-db56d408d9e3-etcd-serving-ca\") pod \"apiserver-76f77b778f-6rlxp\" (UID: \"a6d331bd-2db3-4319-9f5c-db56d408d9e3\") " pod="openshift-apiserver/apiserver-76f77b778f-6rlxp" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.601448 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1afc0f4c-e02d-4a70-aaba-e761e8c04eee-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-lndnt\" (UID: \"1afc0f4c-e02d-4a70-aaba-e761e8c04eee\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lndnt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.601731 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-cb8nk\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.601871 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f9b8f57e-00b9-4355-ace2-0319d320d208-tmpfs\") pod \"packageserver-d55dfcdfc-mr7cp\" (UID: \"f9b8f57e-00b9-4355-ace2-0319d320d208\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mr7cp" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.602044 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-cb8nk\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.602044 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/033cb12f-278f-431a-8104-519db9a3152f-signing-cabundle\") pod \"service-ca-9c57cc56f-gz7kg\" (UID: \"033cb12f-278f-431a-8104-519db9a3152f\") " pod="openshift-service-ca/service-ca-9c57cc56f-gz7kg" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.602146 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5scg\" (UniqueName: \"kubernetes.io/projected/033cb12f-278f-431a-8104-519db9a3152f-kube-api-access-s5scg\") pod \"service-ca-9c57cc56f-gz7kg\" (UID: \"033cb12f-278f-431a-8104-519db9a3152f\") " pod="openshift-service-ca/service-ca-9c57cc56f-gz7kg" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 
17:00:51.602176 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6d331bd-2db3-4319-9f5c-db56d408d9e3-trusted-ca-bundle\") pod \"apiserver-76f77b778f-6rlxp\" (UID: \"a6d331bd-2db3-4319-9f5c-db56d408d9e3\") " pod="openshift-apiserver/apiserver-76f77b778f-6rlxp" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.602199 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/359660cd-b412-4640-bedf-993e976e7b3c-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-88lkr\" (UID: \"359660cd-b412-4640-bedf-993e976e7b3c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-88lkr" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.602223 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05d74105-0ecd-41ac-9001-8b21b0fd6ba4-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-l9nqd\" (UID: \"05d74105-0ecd-41ac-9001-8b21b0fd6ba4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l9nqd" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.602271 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5c244eff-aada-44f3-b250-96878a3400c4-etcd-client\") pod \"etcd-operator-b45778765-27vjc\" (UID: \"5c244eff-aada-44f3-b250-96878a3400c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-27vjc" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.602301 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ac10f013-cd1f-47e0-8f1c-5ff4e6e75784-metrics-tls\") pod \"dns-default-dgcqn\" (UID: \"ac10f013-cd1f-47e0-8f1c-5ff4e6e75784\") " 
pod="openshift-dns/dns-default-dgcqn" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.602329 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-cb8nk\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.602356 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b28404ed-2e71-4b3f-9140-35ee89dbc8f2-console-config\") pod \"console-f9d7485db-6qgnn\" (UID: \"b28404ed-2e71-4b3f-9140-35ee89dbc8f2\") " pod="openshift-console/console-f9d7485db-6qgnn" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.602388 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/004316da-16cd-49ab-b14d-282c28da6fad-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-8m4l6\" (UID: \"004316da-16cd-49ab-b14d-282c28da6fad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8m4l6" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.602418 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr9jp\" (UniqueName: \"kubernetes.io/projected/004316da-16cd-49ab-b14d-282c28da6fad-kube-api-access-nr9jp\") pod \"package-server-manager-789f6589d5-8m4l6\" (UID: \"004316da-16cd-49ab-b14d-282c28da6fad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8m4l6" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.602441 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/77a97acb-2908-48fb-8bcd-0647f3e90160-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-7xb2b\" (UID: \"77a97acb-2908-48fb-8bcd-0647f3e90160\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7xb2b" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.602554 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-cb8nk\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.602447 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-cb8nk\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.602622 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/550752e4-a1d9-46f4-9118-9e9919b2fe6b-config\") pod \"service-ca-operator-777779d784-cjzsq\" (UID: \"550752e4-a1d9-46f4-9118-9e9919b2fe6b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cjzsq" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.602656 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-cb8nk\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.602677 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/73de6ef2-e139-4185-9f56-9db885734ffe-bound-sa-token\") pod \"ingress-operator-5b745b69d9-58fcz\" (UID: \"73de6ef2-e139-4185-9f56-9db885734ffe\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-58fcz" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.602697 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42c0e428-821f-45a1-85a7-54ebdb81ef1c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-cl895\" (UID: \"42c0e428-821f-45a1-85a7-54ebdb81ef1c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cl895" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.602716 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/81c2f96b-55e0-483b-b72c-df7e156e9218-audit-dir\") pod \"apiserver-7bbb656c7d-6cghs\" (UID: \"81c2f96b-55e0-483b-b72c-df7e156e9218\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cghs" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.602733 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqhmb\" (UniqueName: \"kubernetes.io/projected/81c2f96b-55e0-483b-b72c-df7e156e9218-kube-api-access-rqhmb\") pod \"apiserver-7bbb656c7d-6cghs\" (UID: \"81c2f96b-55e0-483b-b72c-df7e156e9218\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cghs" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.602759 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/149e3000-35d7-47bd-83f0-00ab5e0736c2-config\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-q7j7b\" (UID: \"149e3000-35d7-47bd-83f0-00ab5e0736c2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-q7j7b" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.602776 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/73de6ef2-e139-4185-9f56-9db885734ffe-metrics-tls\") pod \"ingress-operator-5b745b69d9-58fcz\" (UID: \"73de6ef2-e139-4185-9f56-9db885734ffe\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-58fcz" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.602793 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a4d83db5-776f-4e95-a6fa-b194344f9819-proxy-tls\") pod \"machine-config-controller-84d6567774-2sfhr\" (UID: \"a4d83db5-776f-4e95-a6fa-b194344f9819\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2sfhr" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.602812 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a6d331bd-2db3-4319-9f5c-db56d408d9e3-image-import-ca\") pod \"apiserver-76f77b778f-6rlxp\" (UID: \"a6d331bd-2db3-4319-9f5c-db56d408d9e3\") " pod="openshift-apiserver/apiserver-76f77b778f-6rlxp" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.602834 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v8bp\" (UniqueName: \"kubernetes.io/projected/ac10f013-cd1f-47e0-8f1c-5ff4e6e75784-kube-api-access-6v8bp\") pod \"dns-default-dgcqn\" (UID: \"ac10f013-cd1f-47e0-8f1c-5ff4e6e75784\") " pod="openshift-dns/dns-default-dgcqn" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.602859 4856 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a1546392-4a69-4b12-8d7e-97450b73b7ca-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-rrhjv\" (UID: \"a1546392-4a69-4b12-8d7e-97450b73b7ca\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rrhjv" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.602880 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qcw5\" (UniqueName: \"kubernetes.io/projected/bf1f11c8-17b8-49b7-b12d-92891f478222-kube-api-access-9qcw5\") pod \"cluster-image-registry-operator-dc59b4c8b-7p5jt\" (UID: \"bf1f11c8-17b8-49b7-b12d-92891f478222\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7p5jt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.602899 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2wtm\" (UniqueName: \"kubernetes.io/projected/0c1af7db-aa80-4cb0-a9cb-5afdf677f28c-kube-api-access-v2wtm\") pod \"cluster-samples-operator-665b6dd947-4w5bf\" (UID: \"0c1af7db-aa80-4cb0-a9cb-5afdf677f28c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4w5bf" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.602959 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/69008ed1-f3e5-400d-852f-adbcd94199f6-audit-dir\") pod \"oauth-openshift-558db77b4-cb8nk\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.602980 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zchmc\" (UniqueName: 
\"kubernetes.io/projected/550752e4-a1d9-46f4-9118-9e9919b2fe6b-kube-api-access-zchmc\") pod \"service-ca-operator-777779d784-cjzsq\" (UID: \"550752e4-a1d9-46f4-9118-9e9919b2fe6b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cjzsq" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.603011 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f9b8f57e-00b9-4355-ace2-0319d320d208-webhook-cert\") pod \"packageserver-d55dfcdfc-mr7cp\" (UID: \"f9b8f57e-00b9-4355-ace2-0319d320d208\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mr7cp" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.603040 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/abbeffe1-cfd5-4476-9a8e-2ab5b4869444-srv-cert\") pod \"catalog-operator-68c6474976-nn46h\" (UID: \"abbeffe1-cfd5-4476-9a8e-2ab5b4869444\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nn46h" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.603047 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/69008ed1-f3e5-400d-852f-adbcd94199f6-audit-policies\") pod \"oauth-openshift-558db77b4-cb8nk\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.603065 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0c1af7db-aa80-4cb0-a9cb-5afdf677f28c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-4w5bf\" (UID: \"0c1af7db-aa80-4cb0-a9cb-5afdf677f28c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4w5bf" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 
17:00:51.603097 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73de6ef2-e139-4185-9f56-9db885734ffe-trusted-ca\") pod \"ingress-operator-5b745b69d9-58fcz\" (UID: \"73de6ef2-e139-4185-9f56-9db885734ffe\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-58fcz" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.603124 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgb6k\" (UniqueName: \"kubernetes.io/projected/94291fa4-24a5-499e-8143-89c8784d9284-kube-api-access-hgb6k\") pod \"downloads-7954f5f757-7l927\" (UID: \"94291fa4-24a5-499e-8143-89c8784d9284\") " pod="openshift-console/downloads-7954f5f757-7l927" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.603157 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ddc2e6b7-5582-4579-bf2c-ed165b74c91a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-vmwvg\" (UID: \"ddc2e6b7-5582-4579-bf2c-ed165b74c91a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vmwvg" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.603177 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81c2f96b-55e0-483b-b72c-df7e156e9218-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6cghs\" (UID: \"81c2f96b-55e0-483b-b72c-df7e156e9218\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cghs" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.603197 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjhr5\" (UniqueName: \"kubernetes.io/projected/beb6f283-75cb-4184-b985-4e6c095feca1-kube-api-access-mjhr5\") pod \"multus-admission-controller-857f4d67dd-ddghz\" (UID: 
\"beb6f283-75cb-4184-b985-4e6c095feca1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ddghz" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.603221 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05d74105-0ecd-41ac-9001-8b21b0fd6ba4-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-l9nqd\" (UID: \"05d74105-0ecd-41ac-9001-8b21b0fd6ba4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l9nqd" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.603245 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7lk4\" (UniqueName: \"kubernetes.io/projected/85f05bd5-ff83-4d29-9531-ab3499088095-kube-api-access-x7lk4\") pod \"router-default-5444994796-h9b2g\" (UID: \"85f05bd5-ff83-4d29-9531-ab3499088095\") " pod="openshift-ingress/router-default-5444994796-h9b2g" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.603277 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/149e3000-35d7-47bd-83f0-00ab5e0736c2-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-q7j7b\" (UID: \"149e3000-35d7-47bd-83f0-00ab5e0736c2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-q7j7b" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.603325 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/85f05bd5-ff83-4d29-9531-ab3499088095-metrics-certs\") pod \"router-default-5444994796-h9b2g\" (UID: \"85f05bd5-ff83-4d29-9531-ab3499088095\") " pod="openshift-ingress/router-default-5444994796-h9b2g" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.603356 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/54a246a2-f674-4735-b295-b56699ece95b-machine-approver-tls\") pod \"machine-approver-56656f9798-962cr\" (UID: \"54a246a2-f674-4735-b295-b56699ece95b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-962cr" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.603379 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/17a72e73-4d54-4a29-a85a-ecb1aff30d10-srv-cert\") pod \"olm-operator-6b444d44fb-k662z\" (UID: \"17a72e73-4d54-4a29-a85a-ecb1aff30d10\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k662z" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.603399 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vxn2\" (UniqueName: \"kubernetes.io/projected/437b5573-a342-4383-ba60-be0e3ccba839-kube-api-access-9vxn2\") pod \"machine-config-server-c9qdp\" (UID: \"437b5573-a342-4383-ba60-be0e3ccba839\") " pod="openshift-machine-config-operator/machine-config-server-c9qdp" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.603431 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a6d331bd-2db3-4319-9f5c-db56d408d9e3-etcd-client\") pod \"apiserver-76f77b778f-6rlxp\" (UID: \"a6d331bd-2db3-4319-9f5c-db56d408d9e3\") " pod="openshift-apiserver/apiserver-76f77b778f-6rlxp" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.603454 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-cb8nk\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" Jan 26 17:00:51 crc 
kubenswrapper[4856]: I0126 17:00:51.603475 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/437b5573-a342-4383-ba60-be0e3ccba839-certs\") pod \"machine-config-server-c9qdp\" (UID: \"437b5573-a342-4383-ba60-be0e3ccba839\") " pod="openshift-machine-config-operator/machine-config-server-c9qdp" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.603499 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/033cb12f-278f-431a-8104-519db9a3152f-signing-key\") pod \"service-ca-9c57cc56f-gz7kg\" (UID: \"033cb12f-278f-431a-8104-519db9a3152f\") " pod="openshift-service-ca/service-ca-9c57cc56f-gz7kg" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.603547 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/831dc87e-8e14-43d3-a36e-dc7679041ae5-trusted-ca\") pod \"console-operator-58897d9998-4pbj2\" (UID: \"831dc87e-8e14-43d3-a36e-dc7679041ae5\") " pod="openshift-console-operator/console-operator-58897d9998-4pbj2" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.603580 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c475g\" (UniqueName: \"kubernetes.io/projected/77a97acb-2908-48fb-8bcd-0647f3e90160-kube-api-access-c475g\") pod \"machine-api-operator-5694c8668f-7xb2b\" (UID: \"77a97acb-2908-48fb-8bcd-0647f3e90160\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7xb2b" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.603604 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/85f05bd5-ff83-4d29-9531-ab3499088095-stats-auth\") pod \"router-default-5444994796-h9b2g\" (UID: \"85f05bd5-ff83-4d29-9531-ab3499088095\") " 
pod="openshift-ingress/router-default-5444994796-h9b2g" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.603631 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kkkd\" (UniqueName: \"kubernetes.io/projected/129a0b30-7132-4e3c-ab84-208cae7cb2f2-kube-api-access-6kkkd\") pod \"machine-config-operator-74547568cd-zzxln\" (UID: \"129a0b30-7132-4e3c-ab84-208cae7cb2f2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zzxln" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.603660 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2wnd\" (UniqueName: \"kubernetes.io/projected/831dc87e-8e14-43d3-a36e-dc7679041ae5-kube-api-access-d2wnd\") pod \"console-operator-58897d9998-4pbj2\" (UID: \"831dc87e-8e14-43d3-a36e-dc7679041ae5\") " pod="openshift-console-operator/console-operator-58897d9998-4pbj2" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.603685 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/17a72e73-4d54-4a29-a85a-ecb1aff30d10-profile-collector-cert\") pod \"olm-operator-6b444d44fb-k662z\" (UID: \"17a72e73-4d54-4a29-a85a-ecb1aff30d10\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k662z" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.603709 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2d37efbf-d18f-486b-9b43-bc4d181af4ca-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wvttb\" (UID: \"2d37efbf-d18f-486b-9b43-bc4d181af4ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-wvttb" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.603734 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b28404ed-2e71-4b3f-9140-35ee89dbc8f2-console-serving-cert\") pod \"console-f9d7485db-6qgnn\" (UID: \"b28404ed-2e71-4b3f-9140-35ee89dbc8f2\") " pod="openshift-console/console-f9d7485db-6qgnn" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.603758 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f9b8f57e-00b9-4355-ace2-0319d320d208-apiservice-cert\") pod \"packageserver-d55dfcdfc-mr7cp\" (UID: \"f9b8f57e-00b9-4355-ace2-0319d320d208\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mr7cp" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.603788 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/359660cd-b412-4640-bedf-993e976e7b3c-config\") pod \"openshift-apiserver-operator-796bbdcf4f-88lkr\" (UID: \"359660cd-b412-4640-bedf-993e976e7b3c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-88lkr" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.603813 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-cb8nk\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.603840 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdxxq\" (UniqueName: \"kubernetes.io/projected/73de6ef2-e139-4185-9f56-9db885734ffe-kube-api-access-hdxxq\") pod \"ingress-operator-5b745b69d9-58fcz\" (UID: \"73de6ef2-e139-4185-9f56-9db885734ffe\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-58fcz" Jan 
26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.603864 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/129a0b30-7132-4e3c-ab84-208cae7cb2f2-auth-proxy-config\") pod \"machine-config-operator-74547568cd-zzxln\" (UID: \"129a0b30-7132-4e3c-ab84-208cae7cb2f2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zzxln" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.603891 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5c244eff-aada-44f3-b250-96878a3400c4-etcd-ca\") pod \"etcd-operator-b45778765-27vjc\" (UID: \"5c244eff-aada-44f3-b250-96878a3400c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-27vjc" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.603912 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c244eff-aada-44f3-b250-96878a3400c4-serving-cert\") pod \"etcd-operator-b45778765-27vjc\" (UID: \"5c244eff-aada-44f3-b250-96878a3400c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-27vjc" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.603933 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e9ee376-b7c7-4b6a-91b3-cc86a3a02dc7-secret-volume\") pod \"collect-profiles-29490780-8q6q4\" (UID: \"7e9ee376-b7c7-4b6a-91b3-cc86a3a02dc7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-8q6q4" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.603955 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/129a0b30-7132-4e3c-ab84-208cae7cb2f2-images\") pod \"machine-config-operator-74547568cd-zzxln\" (UID: 
\"129a0b30-7132-4e3c-ab84-208cae7cb2f2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zzxln" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.603985 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c8657575-cd22-4ebc-ae9d-4174366985d3-registration-dir\") pod \"csi-hostpathplugin-vfm8t\" (UID: \"c8657575-cd22-4ebc-ae9d-4174366985d3\") " pod="hostpath-provisioner/csi-hostpathplugin-vfm8t" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.604014 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/81c2f96b-55e0-483b-b72c-df7e156e9218-encryption-config\") pod \"apiserver-7bbb656c7d-6cghs\" (UID: \"81c2f96b-55e0-483b-b72c-df7e156e9218\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cghs" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.604038 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxgfj\" (UniqueName: \"kubernetes.io/projected/7e9ee376-b7c7-4b6a-91b3-cc86a3a02dc7-kube-api-access-kxgfj\") pod \"collect-profiles-29490780-8q6q4\" (UID: \"7e9ee376-b7c7-4b6a-91b3-cc86a3a02dc7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-8q6q4" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.604062 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-cb8nk\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.604086 4856 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c8657575-cd22-4ebc-ae9d-4174366985d3-socket-dir\") pod \"csi-hostpathplugin-vfm8t\" (UID: \"c8657575-cd22-4ebc-ae9d-4174366985d3\") " pod="hostpath-provisioner/csi-hostpathplugin-vfm8t" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.604103 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fe6baed-ab97-4d8a-8be2-6f00f9698136-config\") pod \"route-controller-manager-6576b87f9c-fpqvc\" (UID: \"5fe6baed-ab97-4d8a-8be2-6f00f9698136\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fpqvc" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.604112 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a6d331bd-2db3-4319-9f5c-db56d408d9e3-image-import-ca\") pod \"apiserver-76f77b778f-6rlxp\" (UID: \"a6d331bd-2db3-4319-9f5c-db56d408d9e3\") " pod="openshift-apiserver/apiserver-76f77b778f-6rlxp" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.604108 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a6d331bd-2db3-4319-9f5c-db56d408d9e3-encryption-config\") pod \"apiserver-76f77b778f-6rlxp\" (UID: \"a6d331bd-2db3-4319-9f5c-db56d408d9e3\") " pod="openshift-apiserver/apiserver-76f77b778f-6rlxp" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.604177 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf1f11c8-17b8-49b7-b12d-92891f478222-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-7p5jt\" (UID: \"bf1f11c8-17b8-49b7-b12d-92891f478222\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7p5jt" Jan 26 17:00:51 crc 
kubenswrapper[4856]: I0126 17:00:51.604352 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/69008ed1-f3e5-400d-852f-adbcd94199f6-audit-dir\") pod \"oauth-openshift-558db77b4-cb8nk\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.604698 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-cb8nk\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.604707 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81c2f96b-55e0-483b-b72c-df7e156e9218-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6cghs\" (UID: \"81c2f96b-55e0-483b-b72c-df7e156e9218\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cghs" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.605064 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b28404ed-2e71-4b3f-9140-35ee89dbc8f2-console-config\") pod \"console-f9d7485db-6qgnn\" (UID: \"b28404ed-2e71-4b3f-9140-35ee89dbc8f2\") " pod="openshift-console/console-f9d7485db-6qgnn" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.605478 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1afc0f4c-e02d-4a70-aaba-e761e8c04eee-serving-cert\") pod \"controller-manager-879f6c89f-lndnt\" (UID: \"1afc0f4c-e02d-4a70-aaba-e761e8c04eee\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lndnt" Jan 26 
17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.605519 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5c244eff-aada-44f3-b250-96878a3400c4-etcd-ca\") pod \"etcd-operator-b45778765-27vjc\" (UID: \"5c244eff-aada-44f3-b250-96878a3400c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-27vjc" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.605632 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/81c2f96b-55e0-483b-b72c-df7e156e9218-audit-dir\") pod \"apiserver-7bbb656c7d-6cghs\" (UID: \"81c2f96b-55e0-483b-b72c-df7e156e9218\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cghs" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.605677 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6d331bd-2db3-4319-9f5c-db56d408d9e3-trusted-ca-bundle\") pod \"apiserver-76f77b778f-6rlxp\" (UID: \"a6d331bd-2db3-4319-9f5c-db56d408d9e3\") " pod="openshift-apiserver/apiserver-76f77b778f-6rlxp" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.606400 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/359660cd-b412-4640-bedf-993e976e7b3c-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-88lkr\" (UID: \"359660cd-b412-4640-bedf-993e976e7b3c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-88lkr" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.606934 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5c244eff-aada-44f3-b250-96878a3400c4-etcd-client\") pod \"etcd-operator-b45778765-27vjc\" (UID: \"5c244eff-aada-44f3-b250-96878a3400c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-27vjc" Jan 26 17:00:51 crc 
kubenswrapper[4856]: I0126 17:00:51.607502 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-cb8nk\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.607783 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-cb8nk\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.608189 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/831dc87e-8e14-43d3-a36e-dc7679041ae5-serving-cert\") pod \"console-operator-58897d9998-4pbj2\" (UID: \"831dc87e-8e14-43d3-a36e-dc7679041ae5\") " pod="openshift-console-operator/console-operator-58897d9998-4pbj2" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.608226 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwmjs\" (UniqueName: \"kubernetes.io/projected/a1546392-4a69-4b12-8d7e-97450b73b7ca-kube-api-access-pwmjs\") pod \"control-plane-machine-set-operator-78cbb6b69f-rrhjv\" (UID: \"a1546392-4a69-4b12-8d7e-97450b73b7ca\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rrhjv" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.608325 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/359660cd-b412-4640-bedf-993e976e7b3c-config\") pod 
\"openshift-apiserver-operator-796bbdcf4f-88lkr\" (UID: \"359660cd-b412-4640-bedf-993e976e7b3c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-88lkr" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.608392 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a6d331bd-2db3-4319-9f5c-db56d408d9e3-audit\") pod \"apiserver-76f77b778f-6rlxp\" (UID: \"a6d331bd-2db3-4319-9f5c-db56d408d9e3\") " pod="openshift-apiserver/apiserver-76f77b778f-6rlxp" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.608430 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/81c2f96b-55e0-483b-b72c-df7e156e9218-etcd-client\") pod \"apiserver-7bbb656c7d-6cghs\" (UID: \"81c2f96b-55e0-483b-b72c-df7e156e9218\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cghs" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.608436 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/149e3000-35d7-47bd-83f0-00ab5e0736c2-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-q7j7b\" (UID: \"149e3000-35d7-47bd-83f0-00ab5e0736c2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-q7j7b" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.608451 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fe6baed-ab97-4d8a-8be2-6f00f9698136-serving-cert\") pod \"route-controller-manager-6576b87f9c-fpqvc\" (UID: \"5fe6baed-ab97-4d8a-8be2-6f00f9698136\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fpqvc" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.608508 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/b28404ed-2e71-4b3f-9140-35ee89dbc8f2-console-serving-cert\") pod \"console-f9d7485db-6qgnn\" (UID: \"b28404ed-2e71-4b3f-9140-35ee89dbc8f2\") " pod="openshift-console/console-f9d7485db-6qgnn" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.608581 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/85f05bd5-ff83-4d29-9531-ab3499088095-default-certificate\") pod \"router-default-5444994796-h9b2g\" (UID: \"85f05bd5-ff83-4d29-9531-ab3499088095\") " pod="openshift-ingress/router-default-5444994796-h9b2g" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.608959 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/831dc87e-8e14-43d3-a36e-dc7679041ae5-trusted-ca\") pod \"console-operator-58897d9998-4pbj2\" (UID: \"831dc87e-8e14-43d3-a36e-dc7679041ae5\") " pod="openshift-console-operator/console-operator-58897d9998-4pbj2" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.609213 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a6d331bd-2db3-4319-9f5c-db56d408d9e3-audit\") pod \"apiserver-76f77b778f-6rlxp\" (UID: \"a6d331bd-2db3-4319-9f5c-db56d408d9e3\") " pod="openshift-apiserver/apiserver-76f77b778f-6rlxp" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.609261 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5fe6baed-ab97-4d8a-8be2-6f00f9698136-client-ca\") pod \"route-controller-manager-6576b87f9c-fpqvc\" (UID: \"5fe6baed-ab97-4d8a-8be2-6f00f9698136\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fpqvc" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.609364 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" 
(UniqueName: \"kubernetes.io/secret/54a246a2-f674-4735-b295-b56699ece95b-machine-approver-tls\") pod \"machine-approver-56656f9798-962cr\" (UID: \"54a246a2-f674-4735-b295-b56699ece95b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-962cr" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.609862 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5fe6baed-ab97-4d8a-8be2-6f00f9698136-client-ca\") pod \"route-controller-manager-6576b87f9c-fpqvc\" (UID: \"5fe6baed-ab97-4d8a-8be2-6f00f9698136\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fpqvc" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.609871 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81c2f96b-55e0-483b-b72c-df7e156e9218-serving-cert\") pod \"apiserver-7bbb656c7d-6cghs\" (UID: \"81c2f96b-55e0-483b-b72c-df7e156e9218\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cghs" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.610308 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-cb8nk\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.610336 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-cb8nk\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.610618 4856 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf1f11c8-17b8-49b7-b12d-92891f478222-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-7p5jt\" (UID: \"bf1f11c8-17b8-49b7-b12d-92891f478222\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7p5jt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.610854 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/149e3000-35d7-47bd-83f0-00ab5e0736c2-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-q7j7b\" (UID: \"149e3000-35d7-47bd-83f0-00ab5e0736c2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-q7j7b" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.610910 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0c1af7db-aa80-4cb0-a9cb-5afdf677f28c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-4w5bf\" (UID: \"0c1af7db-aa80-4cb0-a9cb-5afdf677f28c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4w5bf" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.611089 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-cb8nk\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.611939 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/81c2f96b-55e0-483b-b72c-df7e156e9218-etcd-client\") pod 
\"apiserver-7bbb656c7d-6cghs\" (UID: \"81c2f96b-55e0-483b-b72c-df7e156e9218\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cghs" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.612040 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-cb8nk\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.612134 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a6d331bd-2db3-4319-9f5c-db56d408d9e3-etcd-client\") pod \"apiserver-76f77b778f-6rlxp\" (UID: \"a6d331bd-2db3-4319-9f5c-db56d408d9e3\") " pod="openshift-apiserver/apiserver-76f77b778f-6rlxp" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.612298 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/831dc87e-8e14-43d3-a36e-dc7679041ae5-serving-cert\") pod \"console-operator-58897d9998-4pbj2\" (UID: \"831dc87e-8e14-43d3-a36e-dc7679041ae5\") " pod="openshift-console-operator/console-operator-58897d9998-4pbj2" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.612488 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c244eff-aada-44f3-b250-96878a3400c4-serving-cert\") pod \"etcd-operator-b45778765-27vjc\" (UID: \"5c244eff-aada-44f3-b250-96878a3400c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-27vjc" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.612639 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/81c2f96b-55e0-483b-b72c-df7e156e9218-encryption-config\") pod \"apiserver-7bbb656c7d-6cghs\" (UID: \"81c2f96b-55e0-483b-b72c-df7e156e9218\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cghs" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.612648 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a6d331bd-2db3-4319-9f5c-db56d408d9e3-encryption-config\") pod \"apiserver-76f77b778f-6rlxp\" (UID: \"a6d331bd-2db3-4319-9f5c-db56d408d9e3\") " pod="openshift-apiserver/apiserver-76f77b778f-6rlxp" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.614346 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fe6baed-ab97-4d8a-8be2-6f00f9698136-serving-cert\") pod \"route-controller-manager-6576b87f9c-fpqvc\" (UID: \"5fe6baed-ab97-4d8a-8be2-6f00f9698136\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fpqvc" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.616607 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.636587 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.640299 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b28404ed-2e71-4b3f-9140-35ee89dbc8f2-console-oauth-config\") pod \"console-f9d7485db-6qgnn\" (UID: \"b28404ed-2e71-4b3f-9140-35ee89dbc8f2\") " pod="openshift-console/console-f9d7485db-6qgnn" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.657294 4856 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.679594 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.696220 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.710260 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ac10f013-cd1f-47e0-8f1c-5ff4e6e75784-metrics-tls\") pod \"dns-default-dgcqn\" (UID: \"ac10f013-cd1f-47e0-8f1c-5ff4e6e75784\") " pod="openshift-dns/dns-default-dgcqn" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.710493 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nr9jp\" (UniqueName: \"kubernetes.io/projected/004316da-16cd-49ab-b14d-282c28da6fad-kube-api-access-nr9jp\") pod \"package-server-manager-789f6589d5-8m4l6\" (UID: \"004316da-16cd-49ab-b14d-282c28da6fad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8m4l6" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.710719 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/004316da-16cd-49ab-b14d-282c28da6fad-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-8m4l6\" (UID: \"004316da-16cd-49ab-b14d-282c28da6fad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8m4l6" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.710814 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/550752e4-a1d9-46f4-9118-9e9919b2fe6b-config\") pod 
\"service-ca-operator-777779d784-cjzsq\" (UID: \"550752e4-a1d9-46f4-9118-9e9919b2fe6b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cjzsq" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.710930 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/73de6ef2-e139-4185-9f56-9db885734ffe-bound-sa-token\") pod \"ingress-operator-5b745b69d9-58fcz\" (UID: \"73de6ef2-e139-4185-9f56-9db885734ffe\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-58fcz" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.711040 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/73de6ef2-e139-4185-9f56-9db885734ffe-metrics-tls\") pod \"ingress-operator-5b745b69d9-58fcz\" (UID: \"73de6ef2-e139-4185-9f56-9db885734ffe\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-58fcz" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.711119 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a4d83db5-776f-4e95-a6fa-b194344f9819-proxy-tls\") pod \"machine-config-controller-84d6567774-2sfhr\" (UID: \"a4d83db5-776f-4e95-a6fa-b194344f9819\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2sfhr" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.711194 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v8bp\" (UniqueName: \"kubernetes.io/projected/ac10f013-cd1f-47e0-8f1c-5ff4e6e75784-kube-api-access-6v8bp\") pod \"dns-default-dgcqn\" (UID: \"ac10f013-cd1f-47e0-8f1c-5ff4e6e75784\") " pod="openshift-dns/dns-default-dgcqn" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.711262 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/a1546392-4a69-4b12-8d7e-97450b73b7ca-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-rrhjv\" (UID: \"a1546392-4a69-4b12-8d7e-97450b73b7ca\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rrhjv" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.711355 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/abbeffe1-cfd5-4476-9a8e-2ab5b4869444-srv-cert\") pod \"catalog-operator-68c6474976-nn46h\" (UID: \"abbeffe1-cfd5-4476-9a8e-2ab5b4869444\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nn46h" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.711466 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zchmc\" (UniqueName: \"kubernetes.io/projected/550752e4-a1d9-46f4-9118-9e9919b2fe6b-kube-api-access-zchmc\") pod \"service-ca-operator-777779d784-cjzsq\" (UID: \"550752e4-a1d9-46f4-9118-9e9919b2fe6b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cjzsq" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.711602 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73de6ef2-e139-4185-9f56-9db885734ffe-trusted-ca\") pod \"ingress-operator-5b745b69d9-58fcz\" (UID: \"73de6ef2-e139-4185-9f56-9db885734ffe\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-58fcz" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.711733 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/17a72e73-4d54-4a29-a85a-ecb1aff30d10-srv-cert\") pod \"olm-operator-6b444d44fb-k662z\" (UID: \"17a72e73-4d54-4a29-a85a-ecb1aff30d10\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k662z" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 
17:00:51.711831 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vxn2\" (UniqueName: \"kubernetes.io/projected/437b5573-a342-4383-ba60-be0e3ccba839-kube-api-access-9vxn2\") pod \"machine-config-server-c9qdp\" (UID: \"437b5573-a342-4383-ba60-be0e3ccba839\") " pod="openshift-machine-config-operator/machine-config-server-c9qdp" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.711951 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/437b5573-a342-4383-ba60-be0e3ccba839-certs\") pod \"machine-config-server-c9qdp\" (UID: \"437b5573-a342-4383-ba60-be0e3ccba839\") " pod="openshift-machine-config-operator/machine-config-server-c9qdp" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.712069 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kkkd\" (UniqueName: \"kubernetes.io/projected/129a0b30-7132-4e3c-ab84-208cae7cb2f2-kube-api-access-6kkkd\") pod \"machine-config-operator-74547568cd-zzxln\" (UID: \"129a0b30-7132-4e3c-ab84-208cae7cb2f2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zzxln" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.712196 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/17a72e73-4d54-4a29-a85a-ecb1aff30d10-profile-collector-cert\") pod \"olm-operator-6b444d44fb-k662z\" (UID: \"17a72e73-4d54-4a29-a85a-ecb1aff30d10\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k662z" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.712306 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2d37efbf-d18f-486b-9b43-bc4d181af4ca-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wvttb\" (UID: 
\"2d37efbf-d18f-486b-9b43-bc4d181af4ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-wvttb" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.712412 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdxxq\" (UniqueName: \"kubernetes.io/projected/73de6ef2-e139-4185-9f56-9db885734ffe-kube-api-access-hdxxq\") pod \"ingress-operator-5b745b69d9-58fcz\" (UID: \"73de6ef2-e139-4185-9f56-9db885734ffe\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-58fcz" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.712517 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/129a0b30-7132-4e3c-ab84-208cae7cb2f2-auth-proxy-config\") pod \"machine-config-operator-74547568cd-zzxln\" (UID: \"129a0b30-7132-4e3c-ab84-208cae7cb2f2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zzxln" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.712677 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e9ee376-b7c7-4b6a-91b3-cc86a3a02dc7-secret-volume\") pod \"collect-profiles-29490780-8q6q4\" (UID: \"7e9ee376-b7c7-4b6a-91b3-cc86a3a02dc7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-8q6q4" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.712873 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/129a0b30-7132-4e3c-ab84-208cae7cb2f2-images\") pod \"machine-config-operator-74547568cd-zzxln\" (UID: \"129a0b30-7132-4e3c-ab84-208cae7cb2f2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zzxln" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.714433 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxgfj\" 
(UniqueName: \"kubernetes.io/projected/7e9ee376-b7c7-4b6a-91b3-cc86a3a02dc7-kube-api-access-kxgfj\") pod \"collect-profiles-29490780-8q6q4\" (UID: \"7e9ee376-b7c7-4b6a-91b3-cc86a3a02dc7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-8q6q4" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.714639 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c8657575-cd22-4ebc-ae9d-4174366985d3-registration-dir\") pod \"csi-hostpathplugin-vfm8t\" (UID: \"c8657575-cd22-4ebc-ae9d-4174366985d3\") " pod="hostpath-provisioner/csi-hostpathplugin-vfm8t" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.714748 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c8657575-cd22-4ebc-ae9d-4174366985d3-socket-dir\") pod \"csi-hostpathplugin-vfm8t\" (UID: \"c8657575-cd22-4ebc-ae9d-4174366985d3\") " pod="hostpath-provisioner/csi-hostpathplugin-vfm8t" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.715048 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwmjs\" (UniqueName: \"kubernetes.io/projected/a1546392-4a69-4b12-8d7e-97450b73b7ca-kube-api-access-pwmjs\") pod \"control-plane-machine-set-operator-78cbb6b69f-rrhjv\" (UID: \"a1546392-4a69-4b12-8d7e-97450b73b7ca\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rrhjv" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.713154 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/129a0b30-7132-4e3c-ab84-208cae7cb2f2-auth-proxy-config\") pod \"machine-config-operator-74547568cd-zzxln\" (UID: \"129a0b30-7132-4e3c-ab84-208cae7cb2f2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zzxln" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 
17:00:51.714946 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c8657575-cd22-4ebc-ae9d-4174366985d3-registration-dir\") pod \"csi-hostpathplugin-vfm8t\" (UID: \"c8657575-cd22-4ebc-ae9d-4174366985d3\") " pod="hostpath-provisioner/csi-hostpathplugin-vfm8t" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.714987 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c8657575-cd22-4ebc-ae9d-4174366985d3-socket-dir\") pod \"csi-hostpathplugin-vfm8t\" (UID: \"c8657575-cd22-4ebc-ae9d-4174366985d3\") " pod="hostpath-provisioner/csi-hostpathplugin-vfm8t" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.714065 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/73de6ef2-e139-4185-9f56-9db885734ffe-metrics-tls\") pod \"ingress-operator-5b745b69d9-58fcz\" (UID: \"73de6ef2-e139-4185-9f56-9db885734ffe\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-58fcz" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.715700 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/abbeffe1-cfd5-4476-9a8e-2ab5b4869444-profile-collector-cert\") pod \"catalog-operator-68c6474976-nn46h\" (UID: \"abbeffe1-cfd5-4476-9a8e-2ab5b4869444\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nn46h" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.715825 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wxzv\" (UniqueName: \"kubernetes.io/projected/17a72e73-4d54-4a29-a85a-ecb1aff30d10-kube-api-access-9wxzv\") pod \"olm-operator-6b444d44fb-k662z\" (UID: \"17a72e73-4d54-4a29-a85a-ecb1aff30d10\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k662z" Jan 26 
17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.715936 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/437b5573-a342-4383-ba60-be0e3ccba839-node-bootstrap-token\") pod \"machine-config-server-c9qdp\" (UID: \"437b5573-a342-4383-ba60-be0e3ccba839\") " pod="openshift-machine-config-operator/machine-config-server-c9qdp" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.716081 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e9ee376-b7c7-4b6a-91b3-cc86a3a02dc7-config-volume\") pod \"collect-profiles-29490780-8q6q4\" (UID: \"7e9ee376-b7c7-4b6a-91b3-cc86a3a02dc7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-8q6q4" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.716212 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6clr\" (UniqueName: \"kubernetes.io/projected/37a77f41-5dbf-4842-9e77-83dc22b50f4a-kube-api-access-w6clr\") pod \"migrator-59844c95c7-qdmxz\" (UID: \"37a77f41-5dbf-4842-9e77-83dc22b50f4a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qdmxz" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.716327 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8qsf\" (UniqueName: \"kubernetes.io/projected/2d37efbf-d18f-486b-9b43-bc4d181af4ca-kube-api-access-b8qsf\") pod \"marketplace-operator-79b997595-wvttb\" (UID: \"2d37efbf-d18f-486b-9b43-bc4d181af4ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-wvttb" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.716435 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hc4dg\" (UniqueName: \"kubernetes.io/projected/abbeffe1-cfd5-4476-9a8e-2ab5b4869444-kube-api-access-hc4dg\") pod 
\"catalog-operator-68c6474976-nn46h\" (UID: \"abbeffe1-cfd5-4476-9a8e-2ab5b4869444\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nn46h" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.716642 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c8657575-cd22-4ebc-ae9d-4174366985d3-csi-data-dir\") pod \"csi-hostpathplugin-vfm8t\" (UID: \"c8657575-cd22-4ebc-ae9d-4174366985d3\") " pod="hostpath-provisioner/csi-hostpathplugin-vfm8t" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.716884 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgzl5\" (UniqueName: \"kubernetes.io/projected/c8657575-cd22-4ebc-ae9d-4174366985d3-kube-api-access-fgzl5\") pod \"csi-hostpathplugin-vfm8t\" (UID: \"c8657575-cd22-4ebc-ae9d-4174366985d3\") " pod="hostpath-provisioner/csi-hostpathplugin-vfm8t" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.716847 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c8657575-cd22-4ebc-ae9d-4174366985d3-csi-data-dir\") pod \"csi-hostpathplugin-vfm8t\" (UID: \"c8657575-cd22-4ebc-ae9d-4174366985d3\") " pod="hostpath-provisioner/csi-hostpathplugin-vfm8t" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.718041 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a4d83db5-776f-4e95-a6fa-b194344f9819-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-2sfhr\" (UID: \"a4d83db5-776f-4e95-a6fa-b194344f9819\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2sfhr" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.717094 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/a4d83db5-776f-4e95-a6fa-b194344f9819-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-2sfhr\" (UID: \"a4d83db5-776f-4e95-a6fa-b194344f9819\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2sfhr" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.721436 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4z5z\" (UniqueName: \"kubernetes.io/projected/05d74105-0ecd-41ac-9001-8b21b0fd6ba4-kube-api-access-m4z5z\") pod \"openshift-controller-manager-operator-756b6f6bc6-l9nqd\" (UID: \"05d74105-0ecd-41ac-9001-8b21b0fd6ba4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l9nqd" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.721588 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2d37efbf-d18f-486b-9b43-bc4d181af4ca-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wvttb\" (UID: \"2d37efbf-d18f-486b-9b43-bc4d181af4ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-wvttb" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.721705 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/550752e4-a1d9-46f4-9118-9e9919b2fe6b-serving-cert\") pod \"service-ca-operator-777779d784-cjzsq\" (UID: \"550752e4-a1d9-46f4-9118-9e9919b2fe6b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cjzsq" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.721808 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac10f013-cd1f-47e0-8f1c-5ff4e6e75784-config-volume\") pod \"dns-default-dgcqn\" (UID: \"ac10f013-cd1f-47e0-8f1c-5ff4e6e75784\") " pod="openshift-dns/dns-default-dgcqn" Jan 26 17:00:51 
crc kubenswrapper[4856]: I0126 17:00:51.721962 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c8657575-cd22-4ebc-ae9d-4174366985d3-mountpoint-dir\") pod \"csi-hostpathplugin-vfm8t\" (UID: \"c8657575-cd22-4ebc-ae9d-4174366985d3\") " pod="hostpath-provisioner/csi-hostpathplugin-vfm8t" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.722079 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c8657575-cd22-4ebc-ae9d-4174366985d3-plugins-dir\") pod \"csi-hostpathplugin-vfm8t\" (UID: \"c8657575-cd22-4ebc-ae9d-4174366985d3\") " pod="hostpath-provisioner/csi-hostpathplugin-vfm8t" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.722175 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c8657575-cd22-4ebc-ae9d-4174366985d3-mountpoint-dir\") pod \"csi-hostpathplugin-vfm8t\" (UID: \"c8657575-cd22-4ebc-ae9d-4174366985d3\") " pod="hostpath-provisioner/csi-hostpathplugin-vfm8t" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.722235 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2hpl\" (UniqueName: \"kubernetes.io/projected/a4d83db5-776f-4e95-a6fa-b194344f9819-kube-api-access-t2hpl\") pod \"machine-config-controller-84d6567774-2sfhr\" (UID: \"a4d83db5-776f-4e95-a6fa-b194344f9819\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2sfhr" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.722446 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/129a0b30-7132-4e3c-ab84-208cae7cb2f2-proxy-tls\") pod \"machine-config-operator-74547568cd-zzxln\" (UID: \"129a0b30-7132-4e3c-ab84-208cae7cb2f2\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zzxln" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.722350 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c8657575-cd22-4ebc-ae9d-4174366985d3-plugins-dir\") pod \"csi-hostpathplugin-vfm8t\" (UID: \"c8657575-cd22-4ebc-ae9d-4174366985d3\") " pod="hostpath-provisioner/csi-hostpathplugin-vfm8t" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.724931 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.732880 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73de6ef2-e139-4185-9f56-9db885734ffe-trusted-ca\") pod \"ingress-operator-5b745b69d9-58fcz\" (UID: \"73de6ef2-e139-4185-9f56-9db885734ffe\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-58fcz" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.736905 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.756780 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.777133 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.796521 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.802291 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/85f05bd5-ff83-4d29-9531-ab3499088095-default-certificate\") 
pod \"router-default-5444994796-h9b2g\" (UID: \"85f05bd5-ff83-4d29-9531-ab3499088095\") " pod="openshift-ingress/router-default-5444994796-h9b2g" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.816581 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.827113 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/85f05bd5-ff83-4d29-9531-ab3499088095-stats-auth\") pod \"router-default-5444994796-h9b2g\" (UID: \"85f05bd5-ff83-4d29-9531-ab3499088095\") " pod="openshift-ingress/router-default-5444994796-h9b2g" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.836552 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.849456 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/85f05bd5-ff83-4d29-9531-ab3499088095-metrics-certs\") pod \"router-default-5444994796-h9b2g\" (UID: \"85f05bd5-ff83-4d29-9531-ab3499088095\") " pod="openshift-ingress/router-default-5444994796-h9b2g" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.856684 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.865023 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85f05bd5-ff83-4d29-9531-ab3499088095-service-ca-bundle\") pod \"router-default-5444994796-h9b2g\" (UID: \"85f05bd5-ff83-4d29-9531-ab3499088095\") " pod="openshift-ingress/router-default-5444994796-h9b2g" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.876736 4856 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress"/"kube-root-ca.crt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.896810 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.916740 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.937375 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.949321 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa11789e-7a2a-4dbf-85ca-c20a9d64a1f4-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6snv6\" (UID: \"fa11789e-7a2a-4dbf-85ca-c20a9d64a1f4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6snv6" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.956702 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.958190 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa11789e-7a2a-4dbf-85ca-c20a9d64a1f4-config\") pod \"kube-controller-manager-operator-78b949d7b-6snv6\" (UID: \"fa11789e-7a2a-4dbf-85ca-c20a9d64a1f4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6snv6" Jan 26 17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.978979 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 26 
17:00:51 crc kubenswrapper[4856]: I0126 17:00:51.997323 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.016853 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.027780 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42c0e428-821f-45a1-85a7-54ebdb81ef1c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-cl895\" (UID: \"42c0e428-821f-45a1-85a7-54ebdb81ef1c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cl895" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.037350 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.047331 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42c0e428-821f-45a1-85a7-54ebdb81ef1c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-cl895\" (UID: \"42c0e428-821f-45a1-85a7-54ebdb81ef1c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cl895" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.056679 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.077451 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.084269 4856 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a4d83db5-776f-4e95-a6fa-b194344f9819-proxy-tls\") pod \"machine-config-controller-84d6567774-2sfhr\" (UID: \"a4d83db5-776f-4e95-a6fa-b194344f9819\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2sfhr" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.097199 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.101088 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/beb6f283-75cb-4184-b985-4e6c095feca1-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-ddghz\" (UID: \"beb6f283-75cb-4184-b985-4e6c095feca1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ddghz" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.117283 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.136827 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.146035 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/129a0b30-7132-4e3c-ab84-208cae7cb2f2-proxy-tls\") pod \"machine-config-operator-74547568cd-zzxln\" (UID: \"129a0b30-7132-4e3c-ab84-208cae7cb2f2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zzxln" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.156705 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.163992 4856 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/129a0b30-7132-4e3c-ab84-208cae7cb2f2-images\") pod \"machine-config-operator-74547568cd-zzxln\" (UID: \"129a0b30-7132-4e3c-ab84-208cae7cb2f2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zzxln" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.176675 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.197265 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.204693 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/17a72e73-4d54-4a29-a85a-ecb1aff30d10-srv-cert\") pod \"olm-operator-6b444d44fb-k662z\" (UID: \"17a72e73-4d54-4a29-a85a-ecb1aff30d10\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k662z" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.217259 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.226061 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/17a72e73-4d54-4a29-a85a-ecb1aff30d10-profile-collector-cert\") pod \"olm-operator-6b444d44fb-k662z\" (UID: \"17a72e73-4d54-4a29-a85a-ecb1aff30d10\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k662z" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.226255 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e9ee376-b7c7-4b6a-91b3-cc86a3a02dc7-secret-volume\") pod 
\"collect-profiles-29490780-8q6q4\" (UID: \"7e9ee376-b7c7-4b6a-91b3-cc86a3a02dc7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-8q6q4" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.228478 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/abbeffe1-cfd5-4476-9a8e-2ab5b4869444-profile-collector-cert\") pod \"catalog-operator-68c6474976-nn46h\" (UID: \"abbeffe1-cfd5-4476-9a8e-2ab5b4869444\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nn46h" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.237796 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.257400 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.276582 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.298671 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.305046 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/abbeffe1-cfd5-4476-9a8e-2ab5b4869444-srv-cert\") pod \"catalog-operator-68c6474976-nn46h\" (UID: \"abbeffe1-cfd5-4476-9a8e-2ab5b4869444\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nn46h" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.316900 4856 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.324701 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/004316da-16cd-49ab-b14d-282c28da6fad-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-8m4l6\" (UID: \"004316da-16cd-49ab-b14d-282c28da6fad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8m4l6" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.337743 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.341048 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f9b8f57e-00b9-4355-ace2-0319d320d208-apiservice-cert\") pod \"packageserver-d55dfcdfc-mr7cp\" (UID: \"f9b8f57e-00b9-4355-ace2-0319d320d208\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mr7cp" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.347504 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f9b8f57e-00b9-4355-ace2-0319d320d208-webhook-cert\") pod \"packageserver-d55dfcdfc-mr7cp\" (UID: \"f9b8f57e-00b9-4355-ace2-0319d320d208\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mr7cp" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.356405 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.364856 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/05d74105-0ecd-41ac-9001-8b21b0fd6ba4-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-l9nqd\" (UID: \"05d74105-0ecd-41ac-9001-8b21b0fd6ba4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l9nqd" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.376793 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.394255 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.394350 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.394354 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.394772 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.396223 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.417002 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.436372 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.450738 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05d74105-0ecd-41ac-9001-8b21b0fd6ba4-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-l9nqd\" (UID: \"05d74105-0ecd-41ac-9001-8b21b0fd6ba4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l9nqd" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.456691 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.477346 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.494760 4856 request.go:700] Waited for 1.017759772s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/secrets?fieldSelector=metadata.name%3Dsigning-key&limit=500&resourceVersion=0 Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.496663 4856 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca"/"signing-key" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.501067 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/033cb12f-278f-431a-8104-519db9a3152f-signing-key\") pod \"service-ca-9c57cc56f-gz7kg\" (UID: \"033cb12f-278f-431a-8104-519db9a3152f\") " pod="openshift-service-ca/service-ca-9c57cc56f-gz7kg" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.516987 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.524362 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/033cb12f-278f-431a-8104-519db9a3152f-signing-cabundle\") pod \"service-ca-9c57cc56f-gz7kg\" (UID: \"033cb12f-278f-431a-8104-519db9a3152f\") " pod="openshift-service-ca/service-ca-9c57cc56f-gz7kg" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.537613 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.577439 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.596659 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 26 17:00:52 crc kubenswrapper[4856]: E0126 17:00:52.598095 4856 secret.go:188] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 26 17:00:52 crc kubenswrapper[4856]: E0126 17:00:52.598214 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ddc2e6b7-5582-4579-bf2c-ed165b74c91a-serving-cert 
podName:ddc2e6b7-5582-4579-bf2c-ed165b74c91a nodeName:}" failed. No retries permitted until 2026-01-26 17:00:53.098174872 +0000 UTC m=+149.051428853 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ddc2e6b7-5582-4579-bf2c-ed165b74c91a-serving-cert") pod "kube-apiserver-operator-766d6c64bb-vmwvg" (UID: "ddc2e6b7-5582-4579-bf2c-ed165b74c91a") : failed to sync secret cache: timed out waiting for the condition Jan 26 17:00:52 crc kubenswrapper[4856]: E0126 17:00:52.599155 4856 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 26 17:00:52 crc kubenswrapper[4856]: E0126 17:00:52.599193 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ddc2e6b7-5582-4579-bf2c-ed165b74c91a-config podName:ddc2e6b7-5582-4579-bf2c-ed165b74c91a nodeName:}" failed. No retries permitted until 2026-01-26 17:00:53.099183081 +0000 UTC m=+149.052437062 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ddc2e6b7-5582-4579-bf2c-ed165b74c91a-config") pod "kube-apiserver-operator-766d6c64bb-vmwvg" (UID: "ddc2e6b7-5582-4579-bf2c-ed165b74c91a") : failed to sync configmap cache: timed out waiting for the condition Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.632587 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.636859 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.648199 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2d37efbf-d18f-486b-9b43-bc4d181af4ca-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wvttb\" (UID: \"2d37efbf-d18f-486b-9b43-bc4d181af4ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-wvttb" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.664313 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.673996 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2d37efbf-d18f-486b-9b43-bc4d181af4ca-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wvttb\" (UID: \"2d37efbf-d18f-486b-9b43-bc4d181af4ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-wvttb" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.677282 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.696624 4856 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.707641 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e9ee376-b7c7-4b6a-91b3-cc86a3a02dc7-config-volume\") pod \"collect-profiles-29490780-8q6q4\" (UID: \"7e9ee376-b7c7-4b6a-91b3-cc86a3a02dc7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-8q6q4" Jan 26 17:00:52 crc kubenswrapper[4856]: E0126 17:00:52.711584 4856 secret.go:188] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition Jan 26 17:00:52 crc kubenswrapper[4856]: E0126 17:00:52.711615 4856 secret.go:188] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition Jan 26 17:00:52 crc kubenswrapper[4856]: E0126 17:00:52.711583 4856 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 26 17:00:52 crc kubenswrapper[4856]: E0126 17:00:52.711681 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1546392-4a69-4b12-8d7e-97450b73b7ca-control-plane-machine-set-operator-tls podName:a1546392-4a69-4b12-8d7e-97450b73b7ca nodeName:}" failed. No retries permitted until 2026-01-26 17:00:53.211659938 +0000 UTC m=+149.164913919 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/a1546392-4a69-4b12-8d7e-97450b73b7ca-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-78cbb6b69f-rrhjv" (UID: "a1546392-4a69-4b12-8d7e-97450b73b7ca") : failed to sync secret cache: timed out waiting for the condition Jan 26 17:00:52 crc kubenswrapper[4856]: E0126 17:00:52.711698 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac10f013-cd1f-47e0-8f1c-5ff4e6e75784-metrics-tls podName:ac10f013-cd1f-47e0-8f1c-5ff4e6e75784 nodeName:}" failed. No retries permitted until 2026-01-26 17:00:53.211691179 +0000 UTC m=+149.164945160 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/ac10f013-cd1f-47e0-8f1c-5ff4e6e75784-metrics-tls") pod "dns-default-dgcqn" (UID: "ac10f013-cd1f-47e0-8f1c-5ff4e6e75784") : failed to sync secret cache: timed out waiting for the condition Jan 26 17:00:52 crc kubenswrapper[4856]: E0126 17:00:52.711765 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/550752e4-a1d9-46f4-9118-9e9919b2fe6b-config podName:550752e4-a1d9-46f4-9118-9e9919b2fe6b nodeName:}" failed. No retries permitted until 2026-01-26 17:00:53.21172791 +0000 UTC m=+149.164981961 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/550752e4-a1d9-46f4-9118-9e9919b2fe6b-config") pod "service-ca-operator-777779d784-cjzsq" (UID: "550752e4-a1d9-46f4-9118-9e9919b2fe6b") : failed to sync configmap cache: timed out waiting for the condition Jan 26 17:00:52 crc kubenswrapper[4856]: E0126 17:00:52.712817 4856 secret.go:188] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition Jan 26 17:00:52 crc kubenswrapper[4856]: E0126 17:00:52.712875 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/437b5573-a342-4383-ba60-be0e3ccba839-certs podName:437b5573-a342-4383-ba60-be0e3ccba839 nodeName:}" failed. No retries permitted until 2026-01-26 17:00:53.212858843 +0000 UTC m=+149.166112885 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/437b5573-a342-4383-ba60-be0e3ccba839-certs") pod "machine-config-server-c9qdp" (UID: "437b5573-a342-4383-ba60-be0e3ccba839") : failed to sync secret cache: timed out waiting for the condition Jan 26 17:00:52 crc kubenswrapper[4856]: E0126 17:00:52.716106 4856 secret.go:188] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Jan 26 17:00:52 crc kubenswrapper[4856]: E0126 17:00:52.716218 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/437b5573-a342-4383-ba60-be0e3ccba839-node-bootstrap-token podName:437b5573-a342-4383-ba60-be0e3ccba839 nodeName:}" failed. No retries permitted until 2026-01-26 17:00:53.216203682 +0000 UTC m=+149.169457663 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/437b5573-a342-4383-ba60-be0e3ccba839-node-bootstrap-token") pod "machine-config-server-c9qdp" (UID: "437b5573-a342-4383-ba60-be0e3ccba839") : failed to sync secret cache: timed out waiting for the condition Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.716995 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 26 17:00:52 crc kubenswrapper[4856]: E0126 17:00:52.722595 4856 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition Jan 26 17:00:52 crc kubenswrapper[4856]: E0126 17:00:52.722662 4856 secret.go:188] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 26 17:00:52 crc kubenswrapper[4856]: E0126 17:00:52.722686 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ac10f013-cd1f-47e0-8f1c-5ff4e6e75784-config-volume podName:ac10f013-cd1f-47e0-8f1c-5ff4e6e75784 nodeName:}" failed. No retries permitted until 2026-01-26 17:00:53.222666523 +0000 UTC m=+149.175920504 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ac10f013-cd1f-47e0-8f1c-5ff4e6e75784-config-volume") pod "dns-default-dgcqn" (UID: "ac10f013-cd1f-47e0-8f1c-5ff4e6e75784") : failed to sync configmap cache: timed out waiting for the condition Jan 26 17:00:52 crc kubenswrapper[4856]: E0126 17:00:52.722719 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/550752e4-a1d9-46f4-9118-9e9919b2fe6b-serving-cert podName:550752e4-a1d9-46f4-9118-9e9919b2fe6b nodeName:}" failed. No retries permitted until 2026-01-26 17:00:53.222705144 +0000 UTC m=+149.175959125 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/550752e4-a1d9-46f4-9118-9e9919b2fe6b-serving-cert") pod "service-ca-operator-777779d784-cjzsq" (UID: "550752e4-a1d9-46f4-9118-9e9919b2fe6b") : failed to sync secret cache: timed out waiting for the condition Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.736814 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.756729 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.776943 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.796275 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.816723 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.836689 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.856472 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.877053 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.897900 4856 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.917570 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.937512 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.957286 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 26 17:00:52 crc kubenswrapper[4856]: I0126 17:00:52.978865 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.013329 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tmd5\" (UniqueName: \"kubernetes.io/projected/2ba3cf6a-a6be-4108-a155-c8bb530aa037-kube-api-access-6tmd5\") pod \"openshift-config-operator-7777fb866f-5bjl7\" (UID: \"2ba3cf6a-a6be-4108-a155-c8bb530aa037\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5bjl7" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.017511 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.037914 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.057020 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.076959 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 26 17:00:53 crc 
kubenswrapper[4856]: I0126 17:00:53.096409 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.117584 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.137482 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.149949 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddc2e6b7-5582-4579-bf2c-ed165b74c91a-config\") pod \"kube-apiserver-operator-766d6c64bb-vmwvg\" (UID: \"ddc2e6b7-5582-4579-bf2c-ed165b74c91a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vmwvg" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.150362 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ddc2e6b7-5582-4579-bf2c-ed165b74c91a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-vmwvg\" (UID: \"ddc2e6b7-5582-4579-bf2c-ed165b74c91a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vmwvg" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.150988 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddc2e6b7-5582-4579-bf2c-ed165b74c91a-config\") pod \"kube-apiserver-operator-766d6c64bb-vmwvg\" (UID: \"ddc2e6b7-5582-4579-bf2c-ed165b74c91a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vmwvg" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.153582 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ddc2e6b7-5582-4579-bf2c-ed165b74c91a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-vmwvg\" (UID: \"ddc2e6b7-5582-4579-bf2c-ed165b74c91a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vmwvg" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.157400 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.177045 4856 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.196929 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.217446 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.237143 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.252108 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/437b5573-a342-4383-ba60-be0e3ccba839-node-bootstrap-token\") pod \"machine-config-server-c9qdp\" (UID: \"437b5573-a342-4383-ba60-be0e3ccba839\") " pod="openshift-machine-config-operator/machine-config-server-c9qdp" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.252264 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/550752e4-a1d9-46f4-9118-9e9919b2fe6b-serving-cert\") pod \"service-ca-operator-777779d784-cjzsq\" (UID: \"550752e4-a1d9-46f4-9118-9e9919b2fe6b\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-cjzsq" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.252307 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac10f013-cd1f-47e0-8f1c-5ff4e6e75784-config-volume\") pod \"dns-default-dgcqn\" (UID: \"ac10f013-cd1f-47e0-8f1c-5ff4e6e75784\") " pod="openshift-dns/dns-default-dgcqn" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.252452 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ac10f013-cd1f-47e0-8f1c-5ff4e6e75784-metrics-tls\") pod \"dns-default-dgcqn\" (UID: \"ac10f013-cd1f-47e0-8f1c-5ff4e6e75784\") " pod="openshift-dns/dns-default-dgcqn" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.252488 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/550752e4-a1d9-46f4-9118-9e9919b2fe6b-config\") pod \"service-ca-operator-777779d784-cjzsq\" (UID: \"550752e4-a1d9-46f4-9118-9e9919b2fe6b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cjzsq" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.252746 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a1546392-4a69-4b12-8d7e-97450b73b7ca-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-rrhjv\" (UID: \"a1546392-4a69-4b12-8d7e-97450b73b7ca\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rrhjv" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.252869 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/437b5573-a342-4383-ba60-be0e3ccba839-certs\") pod \"machine-config-server-c9qdp\" (UID: 
\"437b5573-a342-4383-ba60-be0e3ccba839\") " pod="openshift-machine-config-operator/machine-config-server-c9qdp" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.253242 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/550752e4-a1d9-46f4-9118-9e9919b2fe6b-config\") pod \"service-ca-operator-777779d784-cjzsq\" (UID: \"550752e4-a1d9-46f4-9118-9e9919b2fe6b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cjzsq" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.255327 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/550752e4-a1d9-46f4-9118-9e9919b2fe6b-serving-cert\") pod \"service-ca-operator-777779d784-cjzsq\" (UID: \"550752e4-a1d9-46f4-9118-9e9919b2fe6b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cjzsq" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.255360 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a1546392-4a69-4b12-8d7e-97450b73b7ca-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-rrhjv\" (UID: \"a1546392-4a69-4b12-8d7e-97450b73b7ca\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rrhjv" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.256982 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.260698 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5bjl7" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.266740 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/437b5573-a342-4383-ba60-be0e3ccba839-certs\") pod \"machine-config-server-c9qdp\" (UID: \"437b5573-a342-4383-ba60-be0e3ccba839\") " pod="openshift-machine-config-operator/machine-config-server-c9qdp" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.278583 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.286999 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/437b5573-a342-4383-ba60-be0e3ccba839-node-bootstrap-token\") pod \"machine-config-server-c9qdp\" (UID: \"437b5573-a342-4383-ba60-be0e3ccba839\") " pod="openshift-machine-config-operator/machine-config-server-c9qdp" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.296742 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.317110 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.327228 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ac10f013-cd1f-47e0-8f1c-5ff4e6e75784-metrics-tls\") pod \"dns-default-dgcqn\" (UID: \"ac10f013-cd1f-47e0-8f1c-5ff4e6e75784\") " pod="openshift-dns/dns-default-dgcqn" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.336282 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 
17:00:53.343374 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac10f013-cd1f-47e0-8f1c-5ff4e6e75784-config-volume\") pod \"dns-default-dgcqn\" (UID: \"ac10f013-cd1f-47e0-8f1c-5ff4e6e75784\") " pod="openshift-dns/dns-default-dgcqn" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.394026 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdh5l\" (UniqueName: \"kubernetes.io/projected/149e3000-35d7-47bd-83f0-00ab5e0736c2-kube-api-access-mdh5l\") pod \"kube-storage-version-migrator-operator-b67b599dd-q7j7b\" (UID: \"149e3000-35d7-47bd-83f0-00ab5e0736c2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-q7j7b" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.412151 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr8gn\" (UniqueName: \"kubernetes.io/projected/1afc0f4c-e02d-4a70-aaba-e761e8c04eee-kube-api-access-mr8gn\") pod \"controller-manager-879f6c89f-lndnt\" (UID: \"1afc0f4c-e02d-4a70-aaba-e761e8c04eee\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lndnt" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.433594 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lg98g\" (UniqueName: \"kubernetes.io/projected/a6d331bd-2db3-4319-9f5c-db56d408d9e3-kube-api-access-lg98g\") pod \"apiserver-76f77b778f-6rlxp\" (UID: \"a6d331bd-2db3-4319-9f5c-db56d408d9e3\") " pod="openshift-apiserver/apiserver-76f77b778f-6rlxp" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.471672 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf1f11c8-17b8-49b7-b12d-92891f478222-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-7p5jt\" (UID: \"bf1f11c8-17b8-49b7-b12d-92891f478222\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7p5jt" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.502158 4856 request.go:700] Waited for 1.90475386s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/serviceaccounts/openshift-apiserver-operator/token Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.503284 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpfwk\" (UniqueName: \"kubernetes.io/projected/5fe6baed-ab97-4d8a-8be2-6f00f9698136-kube-api-access-hpfwk\") pod \"route-controller-manager-6576b87f9c-fpqvc\" (UID: \"5fe6baed-ab97-4d8a-8be2-6f00f9698136\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fpqvc" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.504773 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkngl\" (UniqueName: \"kubernetes.io/projected/5c244eff-aada-44f3-b250-96878a3400c4-kube-api-access-nkngl\") pod \"etcd-operator-b45778765-27vjc\" (UID: \"5c244eff-aada-44f3-b250-96878a3400c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-27vjc" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.518271 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkl7p\" (UniqueName: \"kubernetes.io/projected/359660cd-b412-4640-bedf-993e976e7b3c-kube-api-access-rkl7p\") pod \"openshift-apiserver-operator-796bbdcf4f-88lkr\" (UID: \"359660cd-b412-4640-bedf-993e976e7b3c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-88lkr" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.532432 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42c0e428-821f-45a1-85a7-54ebdb81ef1c-kube-api-access\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-cl895\" (UID: \"42c0e428-821f-45a1-85a7-54ebdb81ef1c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cl895" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.552457 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vq4z\" (UniqueName: \"kubernetes.io/projected/a7e5d16a-d45d-4b40-93bf-bfaa6be2d1c0-kube-api-access-7vq4z\") pod \"authentication-operator-69f744f599-jdjcq\" (UID: \"a7e5d16a-d45d-4b40-93bf-bfaa6be2d1c0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jdjcq" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.571337 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6fxv\" (UniqueName: \"kubernetes.io/projected/f9b8f57e-00b9-4355-ace2-0319d320d208-kube-api-access-d6fxv\") pod \"packageserver-d55dfcdfc-mr7cp\" (UID: \"f9b8f57e-00b9-4355-ace2-0319d320d208\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mr7cp" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.574001 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5bjl7"] Jan 26 17:00:53 crc kubenswrapper[4856]: W0126 17:00:53.582509 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ba3cf6a_a6be_4108_a155_c8bb530aa037.slice/crio-f9c4b2e98acee19d774149a7d740d7163dacd0dcb0f25054caba9129d5ee7274 WatchSource:0}: Error finding container f9c4b2e98acee19d774149a7d740d7163dacd0dcb0f25054caba9129d5ee7274: Status 404 returned error can't find the container with id f9c4b2e98acee19d774149a7d740d7163dacd0dcb0f25054caba9129d5ee7274 Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.595470 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf2b2\" (UniqueName: 
\"kubernetes.io/projected/69008ed1-f3e5-400d-852f-adbcd94199f6-kube-api-access-kf2b2\") pod \"oauth-openshift-558db77b4-cb8nk\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.612871 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvkhn\" (UniqueName: \"kubernetes.io/projected/b28404ed-2e71-4b3f-9140-35ee89dbc8f2-kube-api-access-dvkhn\") pod \"console-f9d7485db-6qgnn\" (UID: \"b28404ed-2e71-4b3f-9140-35ee89dbc8f2\") " pod="openshift-console/console-f9d7485db-6qgnn" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.616795 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-88lkr" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.637362 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2274g\" (UniqueName: \"kubernetes.io/projected/54a246a2-f674-4735-b295-b56699ece95b-kube-api-access-2274g\") pod \"machine-approver-56656f9798-962cr\" (UID: \"54a246a2-f674-4735-b295-b56699ece95b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-962cr" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.639981 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-6qgnn" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.652007 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-jdjcq" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.653069 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa11789e-7a2a-4dbf-85ca-c20a9d64a1f4-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6snv6\" (UID: \"fa11789e-7a2a-4dbf-85ca-c20a9d64a1f4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6snv6" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.669974 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-27vjc" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.675172 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5scg\" (UniqueName: \"kubernetes.io/projected/033cb12f-278f-431a-8104-519db9a3152f-kube-api-access-s5scg\") pod \"service-ca-9c57cc56f-gz7kg\" (UID: \"033cb12f-278f-431a-8104-519db9a3152f\") " pod="openshift-service-ca/service-ca-9c57cc56f-gz7kg" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.677901 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-q7j7b" Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.695338 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-6rlxp"
Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.696909 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2wtm\" (UniqueName: \"kubernetes.io/projected/0c1af7db-aa80-4cb0-a9cb-5afdf677f28c-kube-api-access-v2wtm\") pod \"cluster-samples-operator-665b6dd947-4w5bf\" (UID: \"0c1af7db-aa80-4cb0-a9cb-5afdf677f28c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4w5bf"
Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.706005 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lndnt"
Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.711475 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qcw5\" (UniqueName: \"kubernetes.io/projected/bf1f11c8-17b8-49b7-b12d-92891f478222-kube-api-access-9qcw5\") pod \"cluster-image-registry-operator-dc59b4c8b-7p5jt\" (UID: \"bf1f11c8-17b8-49b7-b12d-92891f478222\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7p5jt"
Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.736144 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgb6k\" (UniqueName: \"kubernetes.io/projected/94291fa4-24a5-499e-8143-89c8784d9284-kube-api-access-hgb6k\") pod \"downloads-7954f5f757-7l927\" (UID: \"94291fa4-24a5-499e-8143-89c8784d9284\") " pod="openshift-console/downloads-7954f5f757-7l927"
Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.760076 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ddc2e6b7-5582-4579-bf2c-ed165b74c91a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-vmwvg\" (UID: \"ddc2e6b7-5582-4579-bf2c-ed165b74c91a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vmwvg"
Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.765131 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6snv6"
Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.783205 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2wnd\" (UniqueName: \"kubernetes.io/projected/831dc87e-8e14-43d3-a36e-dc7679041ae5-kube-api-access-d2wnd\") pod \"console-operator-58897d9998-4pbj2\" (UID: \"831dc87e-8e14-43d3-a36e-dc7679041ae5\") " pod="openshift-console-operator/console-operator-58897d9998-4pbj2"
Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.785893 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fpqvc"
Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.800304 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cl895"
Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.803934 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqhmb\" (UniqueName: \"kubernetes.io/projected/81c2f96b-55e0-483b-b72c-df7e156e9218-kube-api-access-rqhmb\") pod \"apiserver-7bbb656c7d-6cghs\" (UID: \"81c2f96b-55e0-483b-b72c-df7e156e9218\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cghs"
Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.835956 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-88lkr"]
Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.838090 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4w5bf"
Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.838914 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjhr5\" (UniqueName: \"kubernetes.io/projected/beb6f283-75cb-4184-b985-4e6c095feca1-kube-api-access-mjhr5\") pod \"multus-admission-controller-857f4d67dd-ddghz\" (UID: \"beb6f283-75cb-4184-b985-4e6c095feca1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ddghz"
Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.844126 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7lk4\" (UniqueName: \"kubernetes.io/projected/85f05bd5-ff83-4d29-9531-ab3499088095-kube-api-access-x7lk4\") pod \"router-default-5444994796-h9b2g\" (UID: \"85f05bd5-ff83-4d29-9531-ab3499088095\") " pod="openshift-ingress/router-default-5444994796-h9b2g"
Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.848772 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-7l927"
Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.851848 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mr7cp"
Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.861735 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c475g\" (UniqueName: \"kubernetes.io/projected/77a97acb-2908-48fb-8bcd-0647f3e90160-kube-api-access-c475g\") pod \"machine-api-operator-5694c8668f-7xb2b\" (UID: \"77a97acb-2908-48fb-8bcd-0647f3e90160\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7xb2b"
Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.867297 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-gz7kg"
Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.873280 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nr9jp\" (UniqueName: \"kubernetes.io/projected/004316da-16cd-49ab-b14d-282c28da6fad-kube-api-access-nr9jp\") pod \"package-server-manager-789f6589d5-8m4l6\" (UID: \"004316da-16cd-49ab-b14d-282c28da6fad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8m4l6"
Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.873658 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk"
Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.873765 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-6qgnn"]
Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.885897 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7p5jt"
Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.894477 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-4pbj2"
Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.895234 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/73de6ef2-e139-4185-9f56-9db885734ffe-bound-sa-token\") pod \"ingress-operator-5b745b69d9-58fcz\" (UID: \"73de6ef2-e139-4185-9f56-9db885734ffe\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-58fcz"
Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.910693 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v8bp\" (UniqueName: \"kubernetes.io/projected/ac10f013-cd1f-47e0-8f1c-5ff4e6e75784-kube-api-access-6v8bp\") pod \"dns-default-dgcqn\" (UID: \"ac10f013-cd1f-47e0-8f1c-5ff4e6e75784\") " pod="openshift-dns/dns-default-dgcqn"
Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.911293 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vmwvg"
Jan 26 17:00:53 crc kubenswrapper[4856]: W0126 17:00:53.918238 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod359660cd_b412_4640_bedf_993e976e7b3c.slice/crio-67342ac4fc36ecabd1a17f918aa59bd99af37bddc54a8e3deb2760014c345a08 WatchSource:0}: Error finding container 67342ac4fc36ecabd1a17f918aa59bd99af37bddc54a8e3deb2760014c345a08: Status 404 returned error can't find the container with id 67342ac4fc36ecabd1a17f918aa59bd99af37bddc54a8e3deb2760014c345a08
Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.924386 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-962cr"
Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.946411 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zchmc\" (UniqueName: \"kubernetes.io/projected/550752e4-a1d9-46f4-9118-9e9919b2fe6b-kube-api-access-zchmc\") pod \"service-ca-operator-777779d784-cjzsq\" (UID: \"550752e4-a1d9-46f4-9118-9e9919b2fe6b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cjzsq"
Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.956226 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vxn2\" (UniqueName: \"kubernetes.io/projected/437b5573-a342-4383-ba60-be0e3ccba839-kube-api-access-9vxn2\") pod \"machine-config-server-c9qdp\" (UID: \"437b5573-a342-4383-ba60-be0e3ccba839\") " pod="openshift-machine-config-operator/machine-config-server-c9qdp"
Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.960168 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-7xb2b"
Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.972347 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-c9qdp"
Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.979880 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-dgcqn"
Jan 26 17:00:53 crc kubenswrapper[4856]: I0126 17:00:53.984020 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kkkd\" (UniqueName: \"kubernetes.io/projected/129a0b30-7132-4e3c-ab84-208cae7cb2f2-kube-api-access-6kkkd\") pod \"machine-config-operator-74547568cd-zzxln\" (UID: \"129a0b30-7132-4e3c-ab84-208cae7cb2f2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zzxln"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.006212 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdxxq\" (UniqueName: \"kubernetes.io/projected/73de6ef2-e139-4185-9f56-9db885734ffe-kube-api-access-hdxxq\") pod \"ingress-operator-5b745b69d9-58fcz\" (UID: \"73de6ef2-e139-4185-9f56-9db885734ffe\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-58fcz"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.034627 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cghs"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.039731 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxgfj\" (UniqueName: \"kubernetes.io/projected/7e9ee376-b7c7-4b6a-91b3-cc86a3a02dc7-kube-api-access-kxgfj\") pod \"collect-profiles-29490780-8q6q4\" (UID: \"7e9ee376-b7c7-4b6a-91b3-cc86a3a02dc7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-8q6q4"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.053814 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-h9b2g"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.059944 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwmjs\" (UniqueName: \"kubernetes.io/projected/a1546392-4a69-4b12-8d7e-97450b73b7ca-kube-api-access-pwmjs\") pod \"control-plane-machine-set-operator-78cbb6b69f-rrhjv\" (UID: \"a1546392-4a69-4b12-8d7e-97450b73b7ca\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rrhjv"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.064562 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wxzv\" (UniqueName: \"kubernetes.io/projected/17a72e73-4d54-4a29-a85a-ecb1aff30d10-kube-api-access-9wxzv\") pod \"olm-operator-6b444d44fb-k662z\" (UID: \"17a72e73-4d54-4a29-a85a-ecb1aff30d10\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k662z"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.079023 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6clr\" (UniqueName: \"kubernetes.io/projected/37a77f41-5dbf-4842-9e77-83dc22b50f4a-kube-api-access-w6clr\") pod \"migrator-59844c95c7-qdmxz\" (UID: \"37a77f41-5dbf-4842-9e77-83dc22b50f4a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qdmxz"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.103594 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-ddghz"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.104683 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8qsf\" (UniqueName: \"kubernetes.io/projected/2d37efbf-d18f-486b-9b43-bc4d181af4ca-kube-api-access-b8qsf\") pod \"marketplace-operator-79b997595-wvttb\" (UID: \"2d37efbf-d18f-486b-9b43-bc4d181af4ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-wvttb"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.106917 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5bjl7" event={"ID":"2ba3cf6a-a6be-4108-a155-c8bb530aa037","Type":"ContainerStarted","Data":"d89a3287b371ef0cb23bab1eb475eccd1999b48798e71ba74c19f952107aef34"}
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.107168 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5bjl7" event={"ID":"2ba3cf6a-a6be-4108-a155-c8bb530aa037","Type":"ContainerStarted","Data":"f9c4b2e98acee19d774149a7d740d7163dacd0dcb0f25054caba9129d5ee7274"}
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.109125 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-6qgnn" event={"ID":"b28404ed-2e71-4b3f-9140-35ee89dbc8f2","Type":"ContainerStarted","Data":"8d60fe83a3b8c25a6706fa61b15688fd93c7ae27849eb62b7e61218c7bdddb31"}
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.110414 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zzxln"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.112374 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-88lkr" event={"ID":"359660cd-b412-4640-bedf-993e976e7b3c","Type":"ContainerStarted","Data":"67342ac4fc36ecabd1a17f918aa59bd99af37bddc54a8e3deb2760014c345a08"}
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.123683 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hc4dg\" (UniqueName: \"kubernetes.io/projected/abbeffe1-cfd5-4476-9a8e-2ab5b4869444-kube-api-access-hc4dg\") pod \"catalog-operator-68c6474976-nn46h\" (UID: \"abbeffe1-cfd5-4476-9a8e-2ab5b4869444\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nn46h"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.132331 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k662z"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.137776 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nn46h"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.139263 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgzl5\" (UniqueName: \"kubernetes.io/projected/c8657575-cd22-4ebc-ae9d-4174366985d3-kube-api-access-fgzl5\") pod \"csi-hostpathplugin-vfm8t\" (UID: \"c8657575-cd22-4ebc-ae9d-4174366985d3\") " pod="hostpath-provisioner/csi-hostpathplugin-vfm8t"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.142867 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8m4l6"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.163423 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4z5z\" (UniqueName: \"kubernetes.io/projected/05d74105-0ecd-41ac-9001-8b21b0fd6ba4-kube-api-access-m4z5z\") pod \"openshift-controller-manager-operator-756b6f6bc6-l9nqd\" (UID: \"05d74105-0ecd-41ac-9001-8b21b0fd6ba4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l9nqd"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.173855 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wvttb"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.180295 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.184004 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-jdjcq"]
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.184080 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-q7j7b"]
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.184093 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-8q6q4"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.187263 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2hpl\" (UniqueName: \"kubernetes.io/projected/a4d83db5-776f-4e95-a6fa-b194344f9819-kube-api-access-t2hpl\") pod \"machine-config-controller-84d6567774-2sfhr\" (UID: \"a4d83db5-776f-4e95-a6fa-b194344f9819\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2sfhr"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.192346 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rrhjv"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.200600 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.202022 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cjzsq"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.218166 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.239004 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.258175 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.260684 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-vfm8t"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.266426 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-6rlxp"]
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.277860 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.285519 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qdmxz"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.287494 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lndnt"]
Jan 26 17:00:54 crc kubenswrapper[4856]: W0126 17:00:54.288312 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7e5d16a_d45d_4b40_93bf_bfaa6be2d1c0.slice/crio-ef1cb2622920f923afb73fc12eda34877038d999d31cb4114611a29f8ca1fbd5 WatchSource:0}: Error finding container ef1cb2622920f923afb73fc12eda34877038d999d31cb4114611a29f8ca1fbd5: Status 404 returned error can't find the container with id ef1cb2622920f923afb73fc12eda34877038d999d31cb4114611a29f8ca1fbd5
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.294118 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-58fcz"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.310543 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-27vjc"]
Jan 26 17:00:54 crc kubenswrapper[4856]: W0126 17:00:54.345946 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6d331bd_2db3_4319_9f5c_db56d408d9e3.slice/crio-bae2856860ccbfbac0017e7da1d4f30234d58daef19aaf9d48d7644f9aac2b38 WatchSource:0}: Error finding container bae2856860ccbfbac0017e7da1d4f30234d58daef19aaf9d48d7644f9aac2b38: Status 404 returned error can't find the container with id bae2856860ccbfbac0017e7da1d4f30234d58daef19aaf9d48d7644f9aac2b38
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.395917 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/113d2266-0e67-4e79-8a17-1a78cb9a13d5-metrics-tls\") pod \"dns-operator-744455d44c-z7cgq\" (UID: \"113d2266-0e67-4e79-8a17-1a78cb9a13d5\") " pod="openshift-dns-operator/dns-operator-744455d44c-z7cgq"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.396009 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cfa40861-cc08-4145-a185-6a3fb07eaabe-trusted-ca\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.396071 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf448\" (UniqueName: \"kubernetes.io/projected/cfa40861-cc08-4145-a185-6a3fb07eaabe-kube-api-access-tf448\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.396245 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9tb9\" (UniqueName: \"kubernetes.io/projected/cb9fb12b-3eb8-4e94-a8cf-9eaf4703a850-kube-api-access-s9tb9\") pod \"ingress-canary-fbsj7\" (UID: \"cb9fb12b-3eb8-4e94-a8cf-9eaf4703a850\") " pod="openshift-ingress-canary/ingress-canary-fbsj7"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.396334 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q847\" (UniqueName: \"kubernetes.io/projected/113d2266-0e67-4e79-8a17-1a78cb9a13d5-kube-api-access-5q847\") pod \"dns-operator-744455d44c-z7cgq\" (UID: \"113d2266-0e67-4e79-8a17-1a78cb9a13d5\") " pod="openshift-dns-operator/dns-operator-744455d44c-z7cgq"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.396441 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cfa40861-cc08-4145-a185-6a3fb07eaabe-installation-pull-secrets\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.396562 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.396664 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cb9fb12b-3eb8-4e94-a8cf-9eaf4703a850-cert\") pod \"ingress-canary-fbsj7\" (UID: \"cb9fb12b-3eb8-4e94-a8cf-9eaf4703a850\") " pod="openshift-ingress-canary/ingress-canary-fbsj7"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.396726 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cfa40861-cc08-4145-a185-6a3fb07eaabe-registry-certificates\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.396946 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cfa40861-cc08-4145-a185-6a3fb07eaabe-registry-tls\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.396974 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cfa40861-cc08-4145-a185-6a3fb07eaabe-ca-trust-extracted\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.397007 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cfa40861-cc08-4145-a185-6a3fb07eaabe-bound-sa-token\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh"
Jan 26 17:00:54 crc kubenswrapper[4856]: E0126 17:00:54.399329 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:00:54.899306536 +0000 UTC m=+150.852560567 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.403338 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2sfhr"
Jan 26 17:00:54 crc kubenswrapper[4856]: W0126 17:00:54.407463 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1afc0f4c_e02d_4a70_aaba_e761e8c04eee.slice/crio-96859d6a59b58c9df792a590deef50eb0ee923d03cb16fdc72abe3d18e466eaa WatchSource:0}: Error finding container 96859d6a59b58c9df792a590deef50eb0ee923d03cb16fdc72abe3d18e466eaa: Status 404 returned error can't find the container with id 96859d6a59b58c9df792a590deef50eb0ee923d03cb16fdc72abe3d18e466eaa
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.408068 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6snv6"]
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.416589 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4w5bf"]
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.442634 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fpqvc"]
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.459200 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l9nqd"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.478700 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-4pbj2"]
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.478750 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vmwvg"]
Jan 26 17:00:54 crc kubenswrapper[4856]: W0126 17:00:54.482935 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa11789e_7a2a_4dbf_85ca_c20a9d64a1f4.slice/crio-92cb1cfef16a0e75e3959ea9e1e938f0b67fd0c0c799ef25c376a0bf826c395e WatchSource:0}: Error finding container 92cb1cfef16a0e75e3959ea9e1e938f0b67fd0c0c799ef25c376a0bf826c395e: Status 404 returned error can't find the container with id 92cb1cfef16a0e75e3959ea9e1e938f0b67fd0c0c799ef25c376a0bf826c395e
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.497894 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.498219 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cb9fb12b-3eb8-4e94-a8cf-9eaf4703a850-cert\") pod \"ingress-canary-fbsj7\" (UID: \"cb9fb12b-3eb8-4e94-a8cf-9eaf4703a850\") " pod="openshift-ingress-canary/ingress-canary-fbsj7"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.498243 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cfa40861-cc08-4145-a185-6a3fb07eaabe-registry-certificates\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.498394 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cfa40861-cc08-4145-a185-6a3fb07eaabe-registry-tls\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.498421 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cfa40861-cc08-4145-a185-6a3fb07eaabe-ca-trust-extracted\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.498443 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cfa40861-cc08-4145-a185-6a3fb07eaabe-bound-sa-token\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.499237 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/113d2266-0e67-4e79-8a17-1a78cb9a13d5-metrics-tls\") pod \"dns-operator-744455d44c-z7cgq\" (UID: \"113d2266-0e67-4e79-8a17-1a78cb9a13d5\") " pod="openshift-dns-operator/dns-operator-744455d44c-z7cgq"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.499287 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cfa40861-cc08-4145-a185-6a3fb07eaabe-trusted-ca\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.499327 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tf448\" (UniqueName: \"kubernetes.io/projected/cfa40861-cc08-4145-a185-6a3fb07eaabe-kube-api-access-tf448\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.499868 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9tb9\" (UniqueName: \"kubernetes.io/projected/cb9fb12b-3eb8-4e94-a8cf-9eaf4703a850-kube-api-access-s9tb9\") pod \"ingress-canary-fbsj7\" (UID: \"cb9fb12b-3eb8-4e94-a8cf-9eaf4703a850\") " pod="openshift-ingress-canary/ingress-canary-fbsj7"
Jan 26 17:00:54 crc kubenswrapper[4856]: E0126 17:00:54.499958 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:00:54.999940524 +0000 UTC m=+150.953194505 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.500009 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q847\" (UniqueName: \"kubernetes.io/projected/113d2266-0e67-4e79-8a17-1a78cb9a13d5-kube-api-access-5q847\") pod \"dns-operator-744455d44c-z7cgq\" (UID: \"113d2266-0e67-4e79-8a17-1a78cb9a13d5\") " pod="openshift-dns-operator/dns-operator-744455d44c-z7cgq"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.500096 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cfa40861-cc08-4145-a185-6a3fb07eaabe-installation-pull-secrets\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.504411 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cfa40861-cc08-4145-a185-6a3fb07eaabe-ca-trust-extracted\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.509206 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cfa40861-cc08-4145-a185-6a3fb07eaabe-trusted-ca\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.510927 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cfa40861-cc08-4145-a185-6a3fb07eaabe-registry-certificates\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.513685 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/113d2266-0e67-4e79-8a17-1a78cb9a13d5-metrics-tls\") pod \"dns-operator-744455d44c-z7cgq\" (UID: \"113d2266-0e67-4e79-8a17-1a78cb9a13d5\") " pod="openshift-dns-operator/dns-operator-744455d44c-z7cgq"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.513797 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cfa40861-cc08-4145-a185-6a3fb07eaabe-registry-tls\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.516255 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cb9fb12b-3eb8-4e94-a8cf-9eaf4703a850-cert\") pod \"ingress-canary-fbsj7\" (UID: \"cb9fb12b-3eb8-4e94-a8cf-9eaf4703a850\") " pod="openshift-ingress-canary/ingress-canary-fbsj7"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.537322 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cfa40861-cc08-4145-a185-6a3fb07eaabe-installation-pull-secrets\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.540633 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-7l927"]
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.558107 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9tb9\" (UniqueName: \"kubernetes.io/projected/cb9fb12b-3eb8-4e94-a8cf-9eaf4703a850-kube-api-access-s9tb9\") pod \"ingress-canary-fbsj7\" (UID: \"cb9fb12b-3eb8-4e94-a8cf-9eaf4703a850\") " pod="openshift-ingress-canary/ingress-canary-fbsj7"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.566669 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cfa40861-cc08-4145-a185-6a3fb07eaabe-bound-sa-token\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.567043 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cl895"]
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.580077 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q847\" (UniqueName: \"kubernetes.io/projected/113d2266-0e67-4e79-8a17-1a78cb9a13d5-kube-api-access-5q847\") pod \"dns-operator-744455d44c-z7cgq\" (UID: \"113d2266-0e67-4e79-8a17-1a78cb9a13d5\") " pod="openshift-dns-operator/dns-operator-744455d44c-z7cgq"
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.597983 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7p5jt"]
Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.601355 4856 kubelet.go:2428] "SyncLoop UPDATE"
source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mr7cp"] Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.601810 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf448\" (UniqueName: \"kubernetes.io/projected/cfa40861-cc08-4145-a185-6a3fb07eaabe-kube-api-access-tf448\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.604384 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:00:54 crc kubenswrapper[4856]: E0126 17:00:54.604932 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:00:55.10491575 +0000 UTC m=+151.058169731 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.611452 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-cb8nk"] Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.707426 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:00:54 crc kubenswrapper[4856]: E0126 17:00:54.708834 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:00:55.207609818 +0000 UTC m=+151.160863799 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.728725 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:00:54 crc kubenswrapper[4856]: E0126 17:00:54.729429 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:00:55.229411991 +0000 UTC m=+151.182665982 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.732617 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-gz7kg"] Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.750321 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-7xb2b"] Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.759878 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6cghs"] Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.827315 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-z7cgq" Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.829706 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:00:54 crc kubenswrapper[4856]: E0126 17:00:54.829965 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:00:55.329941756 +0000 UTC m=+151.283195737 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.830373 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:00:54 crc kubenswrapper[4856]: E0126 17:00:54.831562 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:00:55.331538053 +0000 UTC m=+151.284792034 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.835105 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-fbsj7" Jan 26 17:00:54 crc kubenswrapper[4856]: I0126 17:00:54.932157 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:00:54 crc kubenswrapper[4856]: E0126 17:00:54.932751 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:00:55.432729387 +0000 UTC m=+151.385983378 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:00:55 crc kubenswrapper[4856]: I0126 17:00:55.034096 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:00:55 crc kubenswrapper[4856]: E0126 17:00:55.034687 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-01-26 17:00:55.534672333 +0000 UTC m=+151.487926314 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:00:55 crc kubenswrapper[4856]: I0126 17:00:55.134962 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:00:55 crc kubenswrapper[4856]: E0126 17:00:55.135397 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:00:55.635378093 +0000 UTC m=+151.588632074 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:00:55 crc kubenswrapper[4856]: I0126 17:00:55.183873 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-7xb2b" event={"ID":"77a97acb-2908-48fb-8bcd-0647f3e90160","Type":"ContainerStarted","Data":"71ba6cb40fc671d8e66509146fc662f55be8bb98bd417ad56358526161b87367"} Jan 26 17:00:55 crc kubenswrapper[4856]: I0126 17:00:55.192440 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-dgcqn"] Jan 26 17:00:55 crc kubenswrapper[4856]: I0126 17:00:55.210305 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-7l927" event={"ID":"94291fa4-24a5-499e-8143-89c8784d9284","Type":"ContainerStarted","Data":"82813129de322c75fa39cc94c94ea8625d6eed2d3ea3f14dd0db7911c9652bb4"} Jan 26 17:00:55 crc kubenswrapper[4856]: I0126 17:00:55.227179 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8m4l6"] Jan 26 17:00:55 crc kubenswrapper[4856]: I0126 17:00:55.233723 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-cjzsq"] Jan 26 17:00:55 crc kubenswrapper[4856]: I0126 17:00:55.235743 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7p5jt" 
event={"ID":"bf1f11c8-17b8-49b7-b12d-92891f478222","Type":"ContainerStarted","Data":"847f817e4681c5f1a2ff6a9b03f573316d052751e866dfd065b55170125ec233"} Jan 26 17:00:55 crc kubenswrapper[4856]: I0126 17:00:55.237931 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:00:55 crc kubenswrapper[4856]: E0126 17:00:55.238384 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:00:55.73837045 +0000 UTC m=+151.691624431 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:00:55 crc kubenswrapper[4856]: I0126 17:00:55.253597 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-27vjc" event={"ID":"5c244eff-aada-44f3-b250-96878a3400c4","Type":"ContainerStarted","Data":"27da60ab04d9c096ac5db9ea266e1bcf3e305705808f89e734cdc0d040595272"} Jan 26 17:00:55 crc kubenswrapper[4856]: I0126 17:00:55.259592 4856 generic.go:334] "Generic (PLEG): container finished" podID="2ba3cf6a-a6be-4108-a155-c8bb530aa037" containerID="d89a3287b371ef0cb23bab1eb475eccd1999b48798e71ba74c19f952107aef34" 
exitCode=0 Jan 26 17:00:55 crc kubenswrapper[4856]: I0126 17:00:55.259682 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5bjl7" event={"ID":"2ba3cf6a-a6be-4108-a155-c8bb530aa037","Type":"ContainerDied","Data":"d89a3287b371ef0cb23bab1eb475eccd1999b48798e71ba74c19f952107aef34"} Jan 26 17:00:55 crc kubenswrapper[4856]: I0126 17:00:55.273506 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cghs" event={"ID":"81c2f96b-55e0-483b-b72c-df7e156e9218","Type":"ContainerStarted","Data":"dac4ba81d1cdc88dc980c2ceb845187c7a1ed41d1d2eec1cf749f33ac5b8b442"} Jan 26 17:00:55 crc kubenswrapper[4856]: I0126 17:00:55.278198 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cl895" event={"ID":"42c0e428-821f-45a1-85a7-54ebdb81ef1c","Type":"ContainerStarted","Data":"cec55d4172500b35f1678464fe1ce0649bb48ca145dea45b1d9baeaa952e2041"} Jan 26 17:00:55 crc kubenswrapper[4856]: I0126 17:00:55.285081 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mr7cp" event={"ID":"f9b8f57e-00b9-4355-ace2-0319d320d208","Type":"ContainerStarted","Data":"5323c8c9fdb9e4225fb31744181dc0b0bd41776e30729fe779fc73fb5659e9a6"} Jan 26 17:00:55 crc kubenswrapper[4856]: I0126 17:00:55.288690 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-h9b2g" event={"ID":"85f05bd5-ff83-4d29-9531-ab3499088095","Type":"ContainerStarted","Data":"1747ce02dc365e25bdb1e14cb852860f0ec5220a32a63437c450ff9da4361ed5"} Jan 26 17:00:55 crc kubenswrapper[4856]: I0126 17:00:55.293601 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-gz7kg" 
event={"ID":"033cb12f-278f-431a-8104-519db9a3152f","Type":"ContainerStarted","Data":"361fe6998603fd54ac5cfe0959e0cbc545d3c5e143e033477253dcf3d57d7a23"} Jan 26 17:00:55 crc kubenswrapper[4856]: I0126 17:00:55.297840 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-88lkr" event={"ID":"359660cd-b412-4640-bedf-993e976e7b3c","Type":"ContainerStarted","Data":"fe76caa8fee51b66026c972b86975afe11768c8e99f4e26a8014d26d3187b8d4"} Jan 26 17:00:55 crc kubenswrapper[4856]: I0126 17:00:55.300630 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-c9qdp" event={"ID":"437b5573-a342-4383-ba60-be0e3ccba839","Type":"ContainerStarted","Data":"5703d26d139036e12af52a438fa520c387d8a24070f61cff00688fb6c5224867"} Jan 26 17:00:55 crc kubenswrapper[4856]: I0126 17:00:55.304710 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-jdjcq" event={"ID":"a7e5d16a-d45d-4b40-93bf-bfaa6be2d1c0","Type":"ContainerStarted","Data":"ef1cb2622920f923afb73fc12eda34877038d999d31cb4114611a29f8ca1fbd5"} Jan 26 17:00:55 crc kubenswrapper[4856]: I0126 17:00:55.314657 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vmwvg" event={"ID":"ddc2e6b7-5582-4579-bf2c-ed165b74c91a","Type":"ContainerStarted","Data":"9c8b75ffa9d72b626a3cbee0eb9647978bd9abaf50267db3c9990debab4058e3"} Jan 26 17:00:55 crc kubenswrapper[4856]: I0126 17:00:55.317693 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-6rlxp" event={"ID":"a6d331bd-2db3-4319-9f5c-db56d408d9e3","Type":"ContainerStarted","Data":"bae2856860ccbfbac0017e7da1d4f30234d58daef19aaf9d48d7644f9aac2b38"} Jan 26 17:00:55 crc kubenswrapper[4856]: I0126 17:00:55.319350 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6snv6" event={"ID":"fa11789e-7a2a-4dbf-85ca-c20a9d64a1f4","Type":"ContainerStarted","Data":"92cb1cfef16a0e75e3959ea9e1e938f0b67fd0c0c799ef25c376a0bf826c395e"} Jan 26 17:00:55 crc kubenswrapper[4856]: I0126 17:00:55.322318 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-962cr" event={"ID":"54a246a2-f674-4735-b295-b56699ece95b","Type":"ContainerStarted","Data":"2953035f4fdd7266bb0ef7eba50a3cc88c7ef83c9d339275478ed0a1da8e092e"} Jan 26 17:00:55 crc kubenswrapper[4856]: I0126 17:00:55.323273 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" event={"ID":"69008ed1-f3e5-400d-852f-adbcd94199f6","Type":"ContainerStarted","Data":"d2e5352f5a4f0bdf4461c4b926a9353c0b4a673c6263c30adba1a3d7a2d6a8ad"} Jan 26 17:00:55 crc kubenswrapper[4856]: I0126 17:00:55.324052 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fpqvc" event={"ID":"5fe6baed-ab97-4d8a-8be2-6f00f9698136","Type":"ContainerStarted","Data":"e3a4f0c156036789efac8b4cdbd3ace5dcdaf8c187d261687c3b9c87a15d74df"} Jan 26 17:00:55 crc kubenswrapper[4856]: I0126 17:00:55.325578 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lndnt" event={"ID":"1afc0f4c-e02d-4a70-aaba-e761e8c04eee","Type":"ContainerStarted","Data":"96859d6a59b58c9df792a590deef50eb0ee923d03cb16fdc72abe3d18e466eaa"} Jan 26 17:00:55 crc kubenswrapper[4856]: I0126 17:00:55.326093 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-lndnt" Jan 26 17:00:55 crc kubenswrapper[4856]: I0126 17:00:55.328367 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-q7j7b" event={"ID":"149e3000-35d7-47bd-83f0-00ab5e0736c2","Type":"ContainerStarted","Data":"b8e5f5596ec8c94d661b87d0c08b723c1088f87879918a4b4f2e2a14cb1358fb"} Jan 26 17:00:55 crc kubenswrapper[4856]: I0126 17:00:55.340427 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:00:55 crc kubenswrapper[4856]: E0126 17:00:55.340780 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:00:55.840747919 +0000 UTC m=+151.794001900 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:00:55 crc kubenswrapper[4856]: I0126 17:00:55.340427 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-6qgnn" event={"ID":"b28404ed-2e71-4b3f-9140-35ee89dbc8f2","Type":"ContainerStarted","Data":"cf3a5cf0d543759d7e8e6e68a6bd4c1b71efee63ebda8e6116e343b588f4f9f9"} Jan 26 17:00:55 crc kubenswrapper[4856]: I0126 17:00:55.341580 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:00:55 crc kubenswrapper[4856]: E0126 17:00:55.342178 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:00:55.8421554 +0000 UTC m=+151.795409441 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:00:55 crc kubenswrapper[4856]: I0126 17:00:55.343083 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-4pbj2" event={"ID":"831dc87e-8e14-43d3-a36e-dc7679041ae5","Type":"ContainerStarted","Data":"65096e9aeb5bb4bd72bc2e5027f15d210d5133a05fe55fb568baf6443a93e32c"} Jan 26 17:00:55 crc kubenswrapper[4856]: W0126 17:00:55.437917 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac10f013_cd1f_47e0_8f1c_5ff4e6e75784.slice/crio-b81ebf6bd6b806c8fa7a6e0ba1caa582537dbc1e86b1c60f8089719a1ee5c590 WatchSource:0}: Error finding container b81ebf6bd6b806c8fa7a6e0ba1caa582537dbc1e86b1c60f8089719a1ee5c590: Status 404 returned error can't find the container with id b81ebf6bd6b806c8fa7a6e0ba1caa582537dbc1e86b1c60f8089719a1ee5c590 Jan 26 17:00:55 crc kubenswrapper[4856]: I0126 17:00:55.442601 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:00:55 crc kubenswrapper[4856]: E0126 17:00:55.442739 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-01-26 17:00:55.942710726 +0000 UTC m=+151.895964707 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 17:00:55 crc kubenswrapper[4856]: I0126 17:00:55.443136 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh"
Jan 26 17:00:55 crc kubenswrapper[4856]: E0126 17:00:55.444808 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:00:55.944794147 +0000 UTC m=+151.898048128 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 17:00:55 crc kubenswrapper[4856]: I0126 17:00:55.461580 4856 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-lndnt container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Jan 26 17:00:55 crc kubenswrapper[4856]: W0126 17:00:55.461749 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod550752e4_a1d9_46f4_9118_9e9919b2fe6b.slice/crio-582a04946abdbc0330959e567196855ede2ec9ec2977ecc42023606fb6a2ddfe WatchSource:0}: Error finding container 582a04946abdbc0330959e567196855ede2ec9ec2977ecc42023606fb6a2ddfe: Status 404 returned error can't find the container with id 582a04946abdbc0330959e567196855ede2ec9ec2977ecc42023606fb6a2ddfe
Jan 26 17:00:55 crc kubenswrapper[4856]: I0126 17:00:55.462905 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-lndnt" podUID="1afc0f4c-e02d-4a70-aaba-e761e8c04eee" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Jan 26 17:00:55 crc kubenswrapper[4856]: I0126 17:00:55.545647 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 17:00:55 crc kubenswrapper[4856]: E0126 17:00:55.545939 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:00:56.045902108 +0000 UTC m=+151.999156089 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 17:00:55 crc kubenswrapper[4856]: I0126 17:00:55.546312 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh"
Jan 26 17:00:55 crc kubenswrapper[4856]: E0126 17:00:55.546689 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:00:56.046677651 +0000 UTC m=+151.999931632 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 17:00:55 crc kubenswrapper[4856]: I0126 17:00:55.907390 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 17:00:55 crc kubenswrapper[4856]: E0126 17:00:55.907846 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:00:56.407824291 +0000 UTC m=+152.361078272 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 17:00:55 crc kubenswrapper[4856]: I0126 17:00:55.960487 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l9nqd"]
Jan 26 17:00:55 crc kubenswrapper[4856]: I0126 17:00:55.960577 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nn46h"]
Jan 26 17:00:55 crc kubenswrapper[4856]: I0126 17:00:55.971314 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-ddghz"]
Jan 26 17:00:55 crc kubenswrapper[4856]: I0126 17:00:55.978566 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k662z"]
Jan 26 17:00:56 crc kubenswrapper[4856]: I0126 17:00:56.000446 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rrhjv"]
Jan 26 17:00:56 crc kubenswrapper[4856]: I0126 17:00:56.008653 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh"
Jan 26 17:00:56 crc kubenswrapper[4856]: E0126 17:00:56.009065 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:00:56.509046616 +0000 UTC m=+152.462300597 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 17:00:56 crc kubenswrapper[4856]: W0126 17:00:56.015898 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbeb6f283_75cb_4184_b985_4e6c095feca1.slice/crio-e86e9d9dead0ba26d06a2376812c403d4aa6815fc1e8e161b932b21c3de11e00 WatchSource:0}: Error finding container e86e9d9dead0ba26d06a2376812c403d4aa6815fc1e8e161b932b21c3de11e00: Status 404 returned error can't find the container with id e86e9d9dead0ba26d06a2376812c403d4aa6815fc1e8e161b932b21c3de11e00
Jan 26 17:00:56 crc kubenswrapper[4856]: I0126 17:00:56.039285 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wvttb"]
Jan 26 17:00:56 crc kubenswrapper[4856]: I0126 17:00:56.155971 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 17:00:56 crc kubenswrapper[4856]: E0126 17:00:56.156463 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:00:56.656442753 +0000 UTC m=+152.609696734 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 17:00:56 crc kubenswrapper[4856]: I0126 17:00:56.194086 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490780-8q6q4"]
Jan 26 17:00:56 crc kubenswrapper[4856]: I0126 17:00:56.197569 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-2sfhr"]
Jan 26 17:00:56 crc kubenswrapper[4856]: W0126 17:00:56.208690 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4d83db5_776f_4e95_a6fa_b194344f9819.slice/crio-0a33ee036611e9639534d8e4446ef3e1eb207d3627ffb8748d14c0cdf8b384ce WatchSource:0}: Error finding container 0a33ee036611e9639534d8e4446ef3e1eb207d3627ffb8748d14c0cdf8b384ce: Status 404 returned error can't find the container with id 0a33ee036611e9639534d8e4446ef3e1eb207d3627ffb8748d14c0cdf8b384ce
Jan 26 17:00:56 crc kubenswrapper[4856]: I0126 17:00:56.252042 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qdmxz"]
Jan 26 17:00:56 crc kubenswrapper[4856]: I0126 17:00:56.261845 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh"
Jan 26 17:00:56 crc kubenswrapper[4856]: E0126 17:00:56.262175 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:00:56.76216125 +0000 UTC m=+152.715415231 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 17:00:56 crc kubenswrapper[4856]: I0126 17:00:56.265997 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-vfm8t"]
Jan 26 17:00:56 crc kubenswrapper[4856]: I0126 17:00:56.292731 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-6qgnn" podStartSLOduration=122.292704791 podStartE2EDuration="2m2.292704791s" podCreationTimestamp="2026-01-26 16:58:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:00:56.290247958 +0000 UTC m=+152.243501959" watchObservedRunningTime="2026-01-26 17:00:56.292704791 +0000 UTC m=+152.245958772"
Jan 26 17:00:56 crc kubenswrapper[4856]: I0126 17:00:56.322757 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-q7j7b" podStartSLOduration=121.322727286 podStartE2EDuration="2m1.322727286s" podCreationTimestamp="2026-01-26 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:00:56.316210144 +0000 UTC m=+152.269464135" watchObservedRunningTime="2026-01-26 17:00:56.322727286 +0000 UTC m=+152.275981277"
Jan 26 17:00:56 crc kubenswrapper[4856]: I0126 17:00:56.324742 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-zzxln"]
Jan 26 17:00:56 crc kubenswrapper[4856]: I0126 17:00:56.338836 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-fbsj7"]
Jan 26 17:00:56 crc kubenswrapper[4856]: I0126 17:00:56.341924 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-88lkr" podStartSLOduration=123.341904002 podStartE2EDuration="2m3.341904002s" podCreationTimestamp="2026-01-26 16:58:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:00:56.340283504 +0000 UTC m=+152.293537495" watchObservedRunningTime="2026-01-26 17:00:56.341904002 +0000 UTC m=+152.295157983"
Jan 26 17:00:56 crc kubenswrapper[4856]: I0126 17:00:56.360164 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-58fcz"]
Jan 26 17:00:56 crc kubenswrapper[4856]: I0126 17:00:56.362718 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-z7cgq"]
Jan 26 17:00:56 crc kubenswrapper[4856]: I0126 17:00:56.368330 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 17:00:56 crc kubenswrapper[4856]: E0126 17:00:56.368923 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:00:56.868903718 +0000 UTC m=+152.822157699 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 17:00:56 crc kubenswrapper[4856]: I0126 17:00:56.371276 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mr7cp" event={"ID":"f9b8f57e-00b9-4355-ace2-0319d320d208","Type":"ContainerStarted","Data":"e4bcd086b81f4285b2ca3164dcb88913e0abc6f6f9c8ae1d8f771b21afd48202"}
Jan 26 17:00:56 crc kubenswrapper[4856]: I0126 17:00:56.374562 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4w5bf" event={"ID":"0c1af7db-aa80-4cb0-a9cb-5afdf677f28c","Type":"ContainerStarted","Data":"ad19be306a94469ab15b9268231447bdbbb8283c3860820099b42bbc7e87b980"}
Jan 26 17:00:56 crc kubenswrapper[4856]: I0126 17:00:56.376394 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-jdjcq" event={"ID":"a7e5d16a-d45d-4b40-93bf-bfaa6be2d1c0","Type":"ContainerStarted","Data":"769ebacaf7d061c49dd62c73c9ee5eb8d4bcc193b6c7cde28085a6dd9765f5e9"}
Jan 26 17:00:56 crc kubenswrapper[4856]: I0126 17:00:56.382913 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-c9qdp" event={"ID":"437b5573-a342-4383-ba60-be0e3ccba839","Type":"ContainerStarted","Data":"ec0e040a6eff7a45d3bb88143a951575d5e006f1f8e3c81af1602d243e444da4"}
Jan 26 17:00:56 crc kubenswrapper[4856]: I0126 17:00:56.386814 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-jdjcq" podStartSLOduration=123.386794866 podStartE2EDuration="2m3.386794866s" podCreationTimestamp="2026-01-26 16:58:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:00:56.385897949 +0000 UTC m=+152.339151950" watchObservedRunningTime="2026-01-26 17:00:56.386794866 +0000 UTC m=+152.340048847"
Jan 26 17:00:56 crc kubenswrapper[4856]: I0126 17:00:56.388199 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lndnt" event={"ID":"1afc0f4c-e02d-4a70-aaba-e761e8c04eee","Type":"ContainerStarted","Data":"e9e54e2a4a2266ca4148b11cb38df08f87c2f2ccd87dc3343d147862786c16e2"}
Jan 26 17:00:56 crc kubenswrapper[4856]: I0126 17:00:56.392737 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-dgcqn" event={"ID":"ac10f013-cd1f-47e0-8f1c-5ff4e6e75784","Type":"ContainerStarted","Data":"b81ebf6bd6b806c8fa7a6e0ba1caa582537dbc1e86b1c60f8089719a1ee5c590"}
Jan 26 17:00:56 crc kubenswrapper[4856]: I0126 17:00:56.396914 4856 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-lndnt container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Jan 26 17:00:56 crc kubenswrapper[4856]: I0126 17:00:56.396977 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-lndnt" podUID="1afc0f4c-e02d-4a70-aaba-e761e8c04eee" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Jan 26 17:00:56 crc kubenswrapper[4856]: I0126 17:00:56.409861 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-lndnt" podStartSLOduration=122.409844735 podStartE2EDuration="2m2.409844735s" podCreationTimestamp="2026-01-26 16:58:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:00:56.409066362 +0000 UTC m=+152.362320373" watchObservedRunningTime="2026-01-26 17:00:56.409844735 +0000 UTC m=+152.363098716"
Jan 26 17:00:56 crc kubenswrapper[4856]: I0126 17:00:56.410338 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-962cr" event={"ID":"54a246a2-f674-4735-b295-b56699ece95b","Type":"ContainerStarted","Data":"17f583a2b977ca8d5b98468ca5a47ddf673a2a5ac93b0240f5ad0498f9ab0f38"}
Jan 26 17:00:56 crc kubenswrapper[4856]: I0126 17:00:56.411626 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2sfhr" event={"ID":"a4d83db5-776f-4e95-a6fa-b194344f9819","Type":"ContainerStarted","Data":"0a33ee036611e9639534d8e4446ef3e1eb207d3627ffb8748d14c0cdf8b384ce"}
Jan 26 17:00:56 crc kubenswrapper[4856]: I0126 17:00:56.412576 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wvttb" event={"ID":"2d37efbf-d18f-486b-9b43-bc4d181af4ca","Type":"ContainerStarted","Data":"fff8ee4c0db342e8c666d6319a47d7101521fb44435e8030d5a5dc565b0b6c44"}
Jan 26 17:00:56 crc kubenswrapper[4856]: I0126 17:00:56.413491 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8m4l6" event={"ID":"004316da-16cd-49ab-b14d-282c28da6fad","Type":"ContainerStarted","Data":"b8122b481c09344feb679c573e930e25f1622bd18aee7661079598c6d3a9a8ac"}
Jan 26 17:00:56 crc kubenswrapper[4856]: I0126 17:00:56.414464 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qdmxz" event={"ID":"37a77f41-5dbf-4842-9e77-83dc22b50f4a","Type":"ContainerStarted","Data":"af1abc94505937923c6569d8078b91b7d858b20669a62b1e4e2df83f5cf3159c"}
Jan 26 17:00:56 crc kubenswrapper[4856]: W0126 17:00:56.417963 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb9fb12b_3eb8_4e94_a8cf_9eaf4703a850.slice/crio-d0b72852c2a3bf2085df9dba8e4da15ce763cef156a346cec23a9847b6fb31d9 WatchSource:0}: Error finding container d0b72852c2a3bf2085df9dba8e4da15ce763cef156a346cec23a9847b6fb31d9: Status 404 returned error can't find the container with id d0b72852c2a3bf2085df9dba8e4da15ce763cef156a346cec23a9847b6fb31d9
Jan 26 17:00:56 crc kubenswrapper[4856]: I0126 17:00:56.420368 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nn46h" event={"ID":"abbeffe1-cfd5-4476-9a8e-2ab5b4869444","Type":"ContainerStarted","Data":"4db554ffddfe2417dd5add45dccd6872bfa1f92a542c1e144f3c8a956cc95996"}
Jan 26 17:00:56 crc kubenswrapper[4856]: I0126 17:00:56.425804 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-c9qdp" podStartSLOduration=5.425788996 podStartE2EDuration="5.425788996s" podCreationTimestamp="2026-01-26 17:00:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:00:56.423268071 +0000 UTC m=+152.376522062" watchObservedRunningTime="2026-01-26 17:00:56.425788996 +0000 UTC m=+152.379042977"
Jan 26 17:00:56 crc kubenswrapper[4856]: I0126 17:00:56.431017 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cjzsq" event={"ID":"550752e4-a1d9-46f4-9118-9e9919b2fe6b","Type":"ContainerStarted","Data":"582a04946abdbc0330959e567196855ede2ec9ec2977ecc42023606fb6a2ddfe"}
Jan 26 17:00:56 crc kubenswrapper[4856]: I0126 17:00:56.439587 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-ddghz" event={"ID":"beb6f283-75cb-4184-b985-4e6c095feca1","Type":"ContainerStarted","Data":"e86e9d9dead0ba26d06a2376812c403d4aa6815fc1e8e161b932b21c3de11e00"}
Jan 26 17:00:56 crc kubenswrapper[4856]: I0126 17:00:56.447495 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rrhjv" event={"ID":"a1546392-4a69-4b12-8d7e-97450b73b7ca","Type":"ContainerStarted","Data":"6d06091dc3fc59968137d394ab73397c735381fe527eabe19085692a3a73391f"}
Jan 26 17:00:56 crc kubenswrapper[4856]: I0126 17:00:56.450904 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-q7j7b" event={"ID":"149e3000-35d7-47bd-83f0-00ab5e0736c2","Type":"ContainerStarted","Data":"36822ae5f9ab805c23d325fa959ca3921a74a99a02a6f3f4c0fa39de2050fd6e"}
Jan 26 17:00:56 crc kubenswrapper[4856]: I0126 17:00:56.452109 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-8q6q4" event={"ID":"7e9ee376-b7c7-4b6a-91b3-cc86a3a02dc7","Type":"ContainerStarted","Data":"d8fe561f33f411cab54065acf50663e1fea5f5209ab612f88976297cc920acef"}
Jan 26 17:00:56 crc kubenswrapper[4856]: I0126 17:00:56.461678 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l9nqd" event={"ID":"05d74105-0ecd-41ac-9001-8b21b0fd6ba4","Type":"ContainerStarted","Data":"109ffd7506918ece16ce7aa025aeab703d068dcf1cd5382a1665ed919d9947c7"}
Jan 26 17:00:56 crc kubenswrapper[4856]: I0126 17:00:56.466471 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k662z" event={"ID":"17a72e73-4d54-4a29-a85a-ecb1aff30d10","Type":"ContainerStarted","Data":"3cda7b649b2ddbe51156afa828e1b3ce94f39d3ac82bca94653c80e65d76c66f"}
Jan 26 17:00:56 crc kubenswrapper[4856]: I0126 17:00:56.470252 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh"
Jan 26 17:00:56 crc kubenswrapper[4856]: E0126 17:00:56.473489 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:00:56.973473452 +0000 UTC m=+152.926727433 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 17:00:56 crc kubenswrapper[4856]: I0126 17:00:56.572648 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 17:00:56 crc kubenswrapper[4856]: E0126 17:00:56.573723 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:00:57.073687487 +0000 UTC m=+153.026941478 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 17:00:56 crc kubenswrapper[4856]: I0126 17:00:56.675104 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh"
Jan 26 17:00:56 crc kubenswrapper[4856]: E0126 17:00:56.675486 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:00:57.175471289 +0000 UTC m=+153.128725270 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 17:00:56 crc kubenswrapper[4856]: I0126 17:00:56.775954 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 17:00:56 crc kubenswrapper[4856]: E0126 17:00:56.776130 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:00:57.276106556 +0000 UTC m=+153.229360537 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 17:00:56 crc kubenswrapper[4856]: I0126 17:00:56.776283 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh"
Jan 26 17:00:56 crc kubenswrapper[4856]: E0126 17:00:56.776615 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:00:57.276606001 +0000 UTC m=+153.229859982 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 17:00:56 crc kubenswrapper[4856]: I0126 17:00:56.877812 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 17:00:56 crc kubenswrapper[4856]: E0126 17:00:56.878218 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:00:57.378179696 +0000 UTC m=+153.331433677 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 17:00:56 crc kubenswrapper[4856]: I0126 17:00:56.950390 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94"
Jan 26 17:00:56 crc kubenswrapper[4856]: I0126 17:00:56.980045 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh"
Jan 26 17:00:56 crc kubenswrapper[4856]: E0126 17:00:56.980465 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:00:57.480451962 +0000 UTC m=+153.433705943 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 17:00:57 crc kubenswrapper[4856]: I0126 17:00:57.082702 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 17:00:57 crc kubenswrapper[4856]: E0126 17:00:57.082928 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:00:57.582892173 +0000 UTC m=+153.536146154 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 17:00:57 crc kubenswrapper[4856]: I0126 17:00:57.084931 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh"
Jan 26 17:00:57 crc kubenswrapper[4856]: E0126 17:00:57.085430 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:00:57.585419358 +0000 UTC m=+153.538673419 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:00:57 crc kubenswrapper[4856]: I0126 17:00:57.186500 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:00:57 crc kubenswrapper[4856]: E0126 17:00:57.186832 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:00:57.686789737 +0000 UTC m=+153.640043728 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:00:57 crc kubenswrapper[4856]: I0126 17:00:57.187071 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:00:57 crc kubenswrapper[4856]: E0126 17:00:57.187713 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:00:57.687687513 +0000 UTC m=+153.640941494 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:00:57 crc kubenswrapper[4856]: I0126 17:00:57.290615 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:00:57 crc kubenswrapper[4856]: E0126 17:00:57.290885 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:00:57.790824315 +0000 UTC m=+153.744078296 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:00:57 crc kubenswrapper[4856]: I0126 17:00:57.291198 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:00:57 crc kubenswrapper[4856]: E0126 17:00:57.291580 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:00:57.791564447 +0000 UTC m=+153.744818428 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:00:57 crc kubenswrapper[4856]: I0126 17:00:57.392755 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:00:57 crc kubenswrapper[4856]: E0126 17:00:57.393102 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:00:57.89306607 +0000 UTC m=+153.846320061 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:00:57 crc kubenswrapper[4856]: I0126 17:00:57.393358 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:00:57 crc kubenswrapper[4856]: E0126 17:00:57.393773 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:00:57.89374421 +0000 UTC m=+153.846998191 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:00:57 crc kubenswrapper[4856]: I0126 17:00:57.472184 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-fbsj7" event={"ID":"cb9fb12b-3eb8-4e94-a8cf-9eaf4703a850","Type":"ContainerStarted","Data":"d0b72852c2a3bf2085df9dba8e4da15ce763cef156a346cec23a9847b6fb31d9"} Jan 26 17:00:57 crc kubenswrapper[4856]: I0126 17:00:57.473497 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-27vjc" event={"ID":"5c244eff-aada-44f3-b250-96878a3400c4","Type":"ContainerStarted","Data":"9ba6ea817a3c31b0e39f2111b57da282eb2b77c13e0ffcfcc27bac01ba4cc371"} Jan 26 17:00:57 crc kubenswrapper[4856]: I0126 17:00:57.474390 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-z7cgq" event={"ID":"113d2266-0e67-4e79-8a17-1a78cb9a13d5","Type":"ContainerStarted","Data":"edbf7d4a874910b774b68f7bbe85478e92a569aff5448e9bbe0b693dd010ecbd"} Jan 26 17:00:57 crc kubenswrapper[4856]: I0126 17:00:57.475678 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-58fcz" event={"ID":"73de6ef2-e139-4185-9f56-9db885734ffe","Type":"ContainerStarted","Data":"7b62cf3316521561ce0b38913d1a2fee49f3f67652607f4d7ede0d1bd6bdef3f"} Jan 26 17:00:57 crc kubenswrapper[4856]: I0126 17:00:57.476456 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-vfm8t" 
event={"ID":"c8657575-cd22-4ebc-ae9d-4174366985d3","Type":"ContainerStarted","Data":"f5c708af576ac7ab49284d96d259123abba4d963b1a82c6a53c2a3bf5852203e"} Jan 26 17:00:57 crc kubenswrapper[4856]: I0126 17:00:57.477439 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zzxln" event={"ID":"129a0b30-7132-4e3c-ab84-208cae7cb2f2","Type":"ContainerStarted","Data":"610adfd4db40083b842818daf5fbf443f006512d8a12c35a52868753c3132f1e"} Jan 26 17:00:57 crc kubenswrapper[4856]: I0126 17:00:57.478839 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-6rlxp" event={"ID":"a6d331bd-2db3-4319-9f5c-db56d408d9e3","Type":"ContainerStarted","Data":"c693764cd154e71eb16d8ca854be70839609355fd70555fd1faebdfa8a4e3e40"} Jan 26 17:00:57 crc kubenswrapper[4856]: I0126 17:00:57.480262 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6snv6" event={"ID":"fa11789e-7a2a-4dbf-85ca-c20a9d64a1f4","Type":"ContainerStarted","Data":"70c82482161b3663e98d8206f317df90be3d8ee251bd8ac00e9b981b55d6156b"} Jan 26 17:00:57 crc kubenswrapper[4856]: I0126 17:00:57.481054 4856 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-lndnt container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 26 17:00:57 crc kubenswrapper[4856]: I0126 17:00:57.481095 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-lndnt" podUID="1afc0f4c-e02d-4a70-aaba-e761e8c04eee" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 26 17:00:57 crc kubenswrapper[4856]: I0126 
17:00:57.494594 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:00:57 crc kubenswrapper[4856]: E0126 17:00:57.494718 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:00:57.994692797 +0000 UTC m=+153.947946788 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:00:57 crc kubenswrapper[4856]: I0126 17:00:57.494887 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:00:57 crc kubenswrapper[4856]: E0126 17:00:57.495330 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 17:00:57.995310955 +0000 UTC m=+153.948564936 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:00:57 crc kubenswrapper[4856]: I0126 17:00:57.595680 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:00:57 crc kubenswrapper[4856]: E0126 17:00:57.595959 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:00:58.095931522 +0000 UTC m=+154.049185503 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:00:57 crc kubenswrapper[4856]: I0126 17:00:57.596423 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:00:57 crc kubenswrapper[4856]: E0126 17:00:57.603210 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:00:58.103142345 +0000 UTC m=+154.056396326 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:00:57 crc kubenswrapper[4856]: I0126 17:00:57.697512 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:00:57 crc kubenswrapper[4856]: E0126 17:00:57.698088 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:00:58.198068674 +0000 UTC m=+154.151322655 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:00:57 crc kubenswrapper[4856]: I0126 17:00:57.799510 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:00:57 crc kubenswrapper[4856]: E0126 17:00:57.800171 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:00:58.300140174 +0000 UTC m=+154.253394335 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:00:57 crc kubenswrapper[4856]: I0126 17:00:57.900856 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:00:57 crc kubenswrapper[4856]: E0126 17:00:57.901229 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:00:58.401196835 +0000 UTC m=+154.354450826 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:00:58 crc kubenswrapper[4856]: I0126 17:00:58.002380 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:00:58 crc kubenswrapper[4856]: E0126 17:00:58.002885 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:00:58.502864433 +0000 UTC m=+154.456118414 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:00:58 crc kubenswrapper[4856]: I0126 17:00:58.104946 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:00:58 crc kubenswrapper[4856]: E0126 17:00:58.105288 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:00:58.605259702 +0000 UTC m=+154.558513713 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:00:58 crc kubenswrapper[4856]: I0126 17:00:58.206946 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:00:58 crc kubenswrapper[4856]: E0126 17:00:58.207418 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:00:58.707397834 +0000 UTC m=+154.660651815 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:00:58 crc kubenswrapper[4856]: I0126 17:00:58.308184 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:00:58 crc kubenswrapper[4856]: E0126 17:00:58.308436 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:00:58.808404373 +0000 UTC m=+154.761658354 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:00:58 crc kubenswrapper[4856]: I0126 17:00:58.308619 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:00:58 crc kubenswrapper[4856]: E0126 17:00:58.309025 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:00:58.809008621 +0000 UTC m=+154.762262602 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:00:58 crc kubenswrapper[4856]: I0126 17:00:58.409368 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:00:58 crc kubenswrapper[4856]: E0126 17:00:58.409861 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:00:58.909824204 +0000 UTC m=+154.863078195 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:00:58 crc kubenswrapper[4856]: I0126 17:00:58.410057 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:00:58 crc kubenswrapper[4856]: E0126 17:00:58.410500 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:00:58.910483683 +0000 UTC m=+154.863737724 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:00:58 crc kubenswrapper[4856]: I0126 17:00:58.519573 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:00:58 crc kubenswrapper[4856]: E0126 17:00:58.519975 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:00:59.019957161 +0000 UTC m=+154.973211142 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:00:58 crc kubenswrapper[4856]: I0126 17:00:58.638904 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:00:58 crc kubenswrapper[4856]: E0126 17:00:58.639810 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:00:59.139785495 +0000 UTC m=+155.093039476 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:00:58 crc kubenswrapper[4856]: I0126 17:00:58.678452 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cghs" event={"ID":"81c2f96b-55e0-483b-b72c-df7e156e9218","Type":"ContainerStarted","Data":"23daacc6ab399456fe30d31494993067d0f18366b56b87411ba96377ebdd2807"} Jan 26 17:00:58 crc kubenswrapper[4856]: I0126 17:00:58.680354 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" event={"ID":"69008ed1-f3e5-400d-852f-adbcd94199f6","Type":"ContainerStarted","Data":"749ef964d6b168f431c27d0286b92e40d64a8b4fb99f430b33432827ee871fc9"} Jan 26 17:00:58 crc kubenswrapper[4856]: I0126 17:00:58.681132 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" Jan 26 17:00:58 crc kubenswrapper[4856]: I0126 17:00:58.692004 4856 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-cb8nk container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" start-of-body= Jan 26 17:00:58 crc kubenswrapper[4856]: I0126 17:00:58.692075 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" podUID="69008ed1-f3e5-400d-852f-adbcd94199f6" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": 
dial tcp 10.217.0.11:6443: connect: connection refused" Jan 26 17:00:58 crc kubenswrapper[4856]: I0126 17:00:58.694445 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5bjl7" event={"ID":"2ba3cf6a-a6be-4108-a155-c8bb530aa037","Type":"ContainerStarted","Data":"7696af9bf5eb7a27c45bc9a500fea17921f66464546a3b193d1abfd56ccd50c4"} Jan 26 17:00:58 crc kubenswrapper[4856]: I0126 17:00:58.742311 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:00:58 crc kubenswrapper[4856]: E0126 17:00:58.742889 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:00:59.242862255 +0000 UTC m=+155.196116246 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:00:58 crc kubenswrapper[4856]: I0126 17:00:58.755455 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cl895" event={"ID":"42c0e428-821f-45a1-85a7-54ebdb81ef1c","Type":"ContainerStarted","Data":"95b79a7170c279e8c4d88caa46ddb2c7a788acc3cffb03a4d607b68a5fd42fc4"} Jan 26 17:00:58 crc kubenswrapper[4856]: I0126 17:00:58.775853 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4w5bf" event={"ID":"0c1af7db-aa80-4cb0-a9cb-5afdf677f28c","Type":"ContainerStarted","Data":"3609c7e69071336f3f98579608ca0b0b398d8bcf3cebb1a6686cd170813984bf"} Jan 26 17:00:58 crc kubenswrapper[4856]: I0126 17:00:58.844479 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:00:58 crc kubenswrapper[4856]: E0126 17:00:58.873363 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:00:59.373326212 +0000 UTC m=+155.326580193 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:00:58 crc kubenswrapper[4856]: I0126 17:00:58.926117 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" podStartSLOduration=125.926086918 podStartE2EDuration="2m5.926086918s" podCreationTimestamp="2026-01-26 16:58:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:00:58.714000214 +0000 UTC m=+154.667254205" watchObservedRunningTime="2026-01-26 17:00:58.926086918 +0000 UTC m=+154.879340899" Jan 26 17:00:58 crc kubenswrapper[4856]: I0126 17:00:58.927291 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cl895" podStartSLOduration=123.927284453 podStartE2EDuration="2m3.927284453s" podCreationTimestamp="2026-01-26 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:00:58.925266964 +0000 UTC m=+154.878520955" watchObservedRunningTime="2026-01-26 17:00:58.927284453 +0000 UTC m=+154.880538434" Jan 26 17:00:58 crc kubenswrapper[4856]: I0126 17:00:58.946650 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:00:58 crc kubenswrapper[4856]: E0126 17:00:58.947024 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:00:59.446989555 +0000 UTC m=+155.400243536 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:00:58 crc kubenswrapper[4856]: I0126 17:00:58.959519 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-7xb2b" event={"ID":"77a97acb-2908-48fb-8bcd-0647f3e90160","Type":"ContainerStarted","Data":"0a57b717a646c106b6a7d1b2a3fe85a7d08effd05da3065545df906392710c90"} Jan 26 17:00:58 crc kubenswrapper[4856]: I0126 17:00:58.961254 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7p5jt" event={"ID":"bf1f11c8-17b8-49b7-b12d-92891f478222","Type":"ContainerStarted","Data":"08d92ff7b7c49b1fb5dee4e1807177a3ef5855778282ac9d1949c687b64826ac"} Jan 26 17:00:58 crc kubenswrapper[4856]: I0126 17:00:58.965919 4856 generic.go:334] "Generic (PLEG): container finished" podID="a6d331bd-2db3-4319-9f5c-db56d408d9e3" containerID="c693764cd154e71eb16d8ca854be70839609355fd70555fd1faebdfa8a4e3e40" exitCode=0 Jan 26 17:00:58 crc kubenswrapper[4856]: I0126 17:00:58.965988 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver/apiserver-76f77b778f-6rlxp" event={"ID":"a6d331bd-2db3-4319-9f5c-db56d408d9e3","Type":"ContainerDied","Data":"c693764cd154e71eb16d8ca854be70839609355fd70555fd1faebdfa8a4e3e40"} Jan 26 17:00:59 crc kubenswrapper[4856]: I0126 17:00:59.044641 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6snv6" podStartSLOduration=124.044622524 podStartE2EDuration="2m4.044622524s" podCreationTimestamp="2026-01-26 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:00:59.042852732 +0000 UTC m=+154.996106713" watchObservedRunningTime="2026-01-26 17:00:59.044622524 +0000 UTC m=+154.997876505" Jan 26 17:00:59 crc kubenswrapper[4856]: I0126 17:00:59.050933 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:00:59 crc kubenswrapper[4856]: E0126 17:00:59.054835 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:00:59.554808744 +0000 UTC m=+155.508062885 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:00:59 crc kubenswrapper[4856]: I0126 17:00:59.181171 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:00:59 crc kubenswrapper[4856]: E0126 17:00:59.182118 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:00:59.682097377 +0000 UTC m=+155.635351368 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:00:59 crc kubenswrapper[4856]: I0126 17:00:59.339748 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:00:59 crc kubenswrapper[4856]: E0126 17:00:59.340303 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:00:59.840278902 +0000 UTC m=+155.793532883 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:00:59 crc kubenswrapper[4856]: I0126 17:00:59.351649 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mr7cp" podStartSLOduration=124.351629636 podStartE2EDuration="2m4.351629636s" podCreationTimestamp="2026-01-26 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:00:59.351047949 +0000 UTC m=+155.304301950" watchObservedRunningTime="2026-01-26 17:00:59.351629636 +0000 UTC m=+155.304883637" Jan 26 17:00:59 crc kubenswrapper[4856]: I0126 17:00:59.441476 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:00:59 crc kubenswrapper[4856]: E0126 17:00:59.441999 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:00:59.94196887 +0000 UTC m=+155.895222861 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:00:59 crc kubenswrapper[4856]: I0126 17:00:59.559276 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:00:59 crc kubenswrapper[4856]: E0126 17:00:59.559898 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:00.059885268 +0000 UTC m=+156.013139249 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:00:59 crc kubenswrapper[4856]: I0126 17:00:59.733148 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:00:59 crc kubenswrapper[4856]: E0126 17:00:59.733940 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:00.233891039 +0000 UTC m=+156.187145040 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:00:59 crc kubenswrapper[4856]: I0126 17:00:59.837545 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:00:59 crc kubenswrapper[4856]: E0126 17:00:59.838174 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:00.338149904 +0000 UTC m=+156.291403885 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:00 crc kubenswrapper[4856]: I0126 17:01:00.052151 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:00 crc kubenswrapper[4856]: E0126 17:01:00.052345 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:00.55232351 +0000 UTC m=+156.505577491 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:00 crc kubenswrapper[4856]: I0126 17:01:00.052894 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:00 crc kubenswrapper[4856]: E0126 17:01:00.053256 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:00.553244627 +0000 UTC m=+156.506498608 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:00 crc kubenswrapper[4856]: I0126 17:01:00.379626 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:00 crc kubenswrapper[4856]: E0126 17:01:00.380100 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:00.880079035 +0000 UTC m=+156.833333016 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:00 crc kubenswrapper[4856]: I0126 17:01:00.427171 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-dgcqn" event={"ID":"ac10f013-cd1f-47e0-8f1c-5ff4e6e75784","Type":"ContainerStarted","Data":"babaad9608eea9b6b1dc555762da5c4716f4dc2f9aefbe5d18813e355824a597"} Jan 26 17:01:00 crc kubenswrapper[4856]: I0126 17:01:00.434466 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vmwvg" event={"ID":"ddc2e6b7-5582-4579-bf2c-ed165b74c91a","Type":"ContainerStarted","Data":"8c85eabf2567394a69674e3e201bd137e5b390af6c44f28f3472f530c44ae4b3"} Jan 26 17:01:00 crc kubenswrapper[4856]: I0126 17:01:00.444058 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-fbsj7" event={"ID":"cb9fb12b-3eb8-4e94-a8cf-9eaf4703a850","Type":"ContainerStarted","Data":"e9c57c3116c91bf8686961596571d2f04ec3432446e43013955f1874993811ea"} Jan 26 17:01:00 crc kubenswrapper[4856]: I0126 17:01:00.453311 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k662z" event={"ID":"17a72e73-4d54-4a29-a85a-ecb1aff30d10","Type":"ContainerStarted","Data":"f3f66e19dd32d40a6256207f49ffc3a9d666ce278b05bb4ef515b5be01f0d91a"} Jan 26 17:01:00 crc kubenswrapper[4856]: I0126 17:01:00.454614 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k662z" Jan 26 17:01:00 crc 
kubenswrapper[4856]: I0126 17:01:00.465231 4856 generic.go:334] "Generic (PLEG): container finished" podID="81c2f96b-55e0-483b-b72c-df7e156e9218" containerID="23daacc6ab399456fe30d31494993067d0f18366b56b87411ba96377ebdd2807" exitCode=0 Jan 26 17:01:00 crc kubenswrapper[4856]: I0126 17:01:00.465569 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cghs" event={"ID":"81c2f96b-55e0-483b-b72c-df7e156e9218","Type":"ContainerDied","Data":"23daacc6ab399456fe30d31494993067d0f18366b56b87411ba96377ebdd2807"} Jan 26 17:01:00 crc kubenswrapper[4856]: I0126 17:01:00.470039 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-4pbj2" event={"ID":"831dc87e-8e14-43d3-a36e-dc7679041ae5","Type":"ContainerStarted","Data":"21bcecc04eac77160f1042631d12a9435ed2f19641a1d340f22c788470d2db4c"} Jan 26 17:01:00 crc kubenswrapper[4856]: I0126 17:01:00.471017 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-4pbj2" Jan 26 17:01:00 crc kubenswrapper[4856]: I0126 17:01:00.472833 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-962cr" event={"ID":"54a246a2-f674-4735-b295-b56699ece95b","Type":"ContainerStarted","Data":"81e3228ea3cb268d00e067cb20424ca316ff00a8dc2ce42ec5ac2ddb0165a7c2"} Jan 26 17:01:00 crc kubenswrapper[4856]: I0126 17:01:00.474596 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wvttb" event={"ID":"2d37efbf-d18f-486b-9b43-bc4d181af4ca","Type":"ContainerStarted","Data":"743ebe09ef635c21a62370a80c15b76e3ff5e7e1801bb955f28ed30f848dcca9"} Jan 26 17:01:00 crc kubenswrapper[4856]: I0126 17:01:00.475407 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-wvttb" Jan 26 17:01:00 crc 
kubenswrapper[4856]: I0126 17:01:00.476896 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rrhjv" event={"ID":"a1546392-4a69-4b12-8d7e-97450b73b7ca","Type":"ContainerStarted","Data":"6732b3a88a90b26d6f17e46917c6b34079eb5e0dff612eb539aca34c8e37ca6c"} Jan 26 17:01:00 crc kubenswrapper[4856]: I0126 17:01:00.478334 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8m4l6" event={"ID":"004316da-16cd-49ab-b14d-282c28da6fad","Type":"ContainerStarted","Data":"9c3a62568b04641da092b2d90197c2236c1d178b7784315aa24e6f7ee39206ab"} Jan 26 17:01:00 crc kubenswrapper[4856]: I0126 17:01:00.479413 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zzxln" event={"ID":"129a0b30-7132-4e3c-ab84-208cae7cb2f2","Type":"ContainerStarted","Data":"c6ca70c17dd9f8d67fa1f3c69f2f9036d5c439cdefe6f738ef00b96ed19e3bb5"} Jan 26 17:01:00 crc kubenswrapper[4856]: I0126 17:01:00.480910 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:00 crc kubenswrapper[4856]: E0126 17:01:00.481646 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:00.98163303 +0000 UTC m=+156.934887011 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:00 crc kubenswrapper[4856]: I0126 17:01:00.482864 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-58fcz" event={"ID":"73de6ef2-e139-4185-9f56-9db885734ffe","Type":"ContainerStarted","Data":"1f14adac5cebdf92a2acc8ff852bdc7f82dfe245accd2081fd775ca85d151340"} Jan 26 17:01:00 crc kubenswrapper[4856]: I0126 17:01:00.601339 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:00 crc kubenswrapper[4856]: E0126 17:01:00.601655 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:01.101613378 +0000 UTC m=+157.054867359 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:00 crc kubenswrapper[4856]: I0126 17:01:00.603625 4856 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-wvttb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Jan 26 17:01:00 crc kubenswrapper[4856]: I0126 17:01:00.603655 4856 patch_prober.go:28] interesting pod/console-operator-58897d9998-4pbj2 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 26 17:01:00 crc kubenswrapper[4856]: I0126 17:01:00.603696 4856 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-k662z container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Jan 26 17:01:00 crc kubenswrapper[4856]: I0126 17:01:00.603738 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-4pbj2" podUID="831dc87e-8e14-43d3-a36e-dc7679041ae5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 26 17:01:00 crc kubenswrapper[4856]: I0126 17:01:00.603766 4856 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k662z" podUID="17a72e73-4d54-4a29-a85a-ecb1aff30d10" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Jan 26 17:01:00 crc kubenswrapper[4856]: I0126 17:01:00.603671 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-wvttb" podUID="2d37efbf-d18f-486b-9b43-bc4d181af4ca" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Jan 26 17:01:00 crc kubenswrapper[4856]: I0126 17:01:00.677156 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-8q6q4" event={"ID":"7e9ee376-b7c7-4b6a-91b3-cc86a3a02dc7","Type":"ContainerStarted","Data":"655c350d2621ac99cae47d6117abe996be96564e1734dccd0a74e6f8446d8e6d"} Jan 26 17:01:00 crc kubenswrapper[4856]: I0126 17:01:00.679163 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fpqvc" event={"ID":"5fe6baed-ab97-4d8a-8be2-6f00f9698136","Type":"ContainerStarted","Data":"56133c3e036efeb9590dc043f9b9af766fce603e2c50cfdca46be37466b88f62"} Jan 26 17:01:00 crc kubenswrapper[4856]: I0126 17:01:00.679950 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fpqvc" Jan 26 17:01:00 crc kubenswrapper[4856]: I0126 17:01:00.698371 4856 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-fpqvc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 26 
17:01:00 crc kubenswrapper[4856]: I0126 17:01:00.698423 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fpqvc" podUID="5fe6baed-ab97-4d8a-8be2-6f00f9698136" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 26 17:01:00 crc kubenswrapper[4856]: I0126 17:01:00.700429 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-z7cgq" event={"ID":"113d2266-0e67-4e79-8a17-1a78cb9a13d5","Type":"ContainerStarted","Data":"4d819592d378e2681ee37d59708ad732f047e864429c81bd39834ba8340ed07b"} Jan 26 17:01:00 crc kubenswrapper[4856]: I0126 17:01:00.703223 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:00 crc kubenswrapper[4856]: E0126 17:01:00.705038 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:01.205018607 +0000 UTC m=+157.158272668 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:00 crc kubenswrapper[4856]: I0126 17:01:00.708122 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-h9b2g" event={"ID":"85f05bd5-ff83-4d29-9531-ab3499088095","Type":"ContainerStarted","Data":"a104ae01c9b404438ac5f68d31f78d35be23d15f7f55bc2db6e35373a2bd7220"} Jan 26 17:01:00 crc kubenswrapper[4856]: I0126 17:01:00.960733 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:00 crc kubenswrapper[4856]: E0126 17:01:00.962625 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:01.462580813 +0000 UTC m=+157.415834824 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:00 crc kubenswrapper[4856]: I0126 17:01:00.963466 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nn46h" event={"ID":"abbeffe1-cfd5-4476-9a8e-2ab5b4869444","Type":"ContainerStarted","Data":"c45386474fc1f7418b5e10777280d9281617bf3a100ed762dd0a20d022c0961d"} Jan 26 17:01:00 crc kubenswrapper[4856]: I0126 17:01:00.963583 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nn46h" Jan 26 17:01:00 crc kubenswrapper[4856]: I0126 17:01:00.965775 4856 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-nn46h container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Jan 26 17:01:00 crc kubenswrapper[4856]: I0126 17:01:00.965834 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nn46h" podUID="abbeffe1-cfd5-4476-9a8e-2ab5b4869444" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Jan 26 17:01:01 crc kubenswrapper[4856]: I0126 17:01:01.409948 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:01 crc kubenswrapper[4856]: I0126 17:01:01.410653 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-h9b2g" Jan 26 17:01:01 crc kubenswrapper[4856]: E0126 17:01:01.412007 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:01.911990926 +0000 UTC m=+157.865244977 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:01 crc kubenswrapper[4856]: I0126 17:01:01.412784 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vmwvg" podStartSLOduration=126.412742588 podStartE2EDuration="2m6.412742588s" podCreationTimestamp="2026-01-26 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:01:01.053598217 +0000 UTC m=+157.006852218" watchObservedRunningTime="2026-01-26 17:01:01.412742588 +0000 UTC m=+157.365996569" Jan 26 17:01:01 crc kubenswrapper[4856]: I0126 17:01:01.418370 4856 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-etcd-operator/etcd-operator-b45778765-27vjc" podStartSLOduration=127.418350103 podStartE2EDuration="2m7.418350103s" podCreationTimestamp="2026-01-26 16:58:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:00:59.385542516 +0000 UTC m=+155.338796517" watchObservedRunningTime="2026-01-26 17:01:01.418350103 +0000 UTC m=+157.371604084" Jan 26 17:01:01 crc kubenswrapper[4856]: I0126 17:01:01.418903 4856 patch_prober.go:28] interesting pod/router-default-5444994796-h9b2g container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 26 17:01:01 crc kubenswrapper[4856]: I0126 17:01:01.418998 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9b2g" podUID="85f05bd5-ff83-4d29-9531-ab3499088095" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 26 17:01:01 crc kubenswrapper[4856]: I0126 17:01:01.441722 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-6rlxp" event={"ID":"a6d331bd-2db3-4319-9f5c-db56d408d9e3","Type":"ContainerStarted","Data":"2e820399efdd76f35845a11fcb687223c7841ebdef9836a702a54e4f2ffafb8a"} Jan 26 17:01:01 crc kubenswrapper[4856]: I0126 17:01:01.444095 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qdmxz" event={"ID":"37a77f41-5dbf-4842-9e77-83dc22b50f4a","Type":"ContainerStarted","Data":"954083c25159407a3d66c9dc10b5da920dc4a8f6475aa2b216c7d3c0bc6df577"} Jan 26 17:01:01 crc kubenswrapper[4856]: I0126 17:01:01.448046 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-7l927" 
event={"ID":"94291fa4-24a5-499e-8143-89c8784d9284","Type":"ContainerStarted","Data":"d13c7142b05ed798c0e5b16508a221e2918021dbec60054995ac94f05ffdad09"} Jan 26 17:01:01 crc kubenswrapper[4856]: I0126 17:01:01.449823 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-7l927" Jan 26 17:01:01 crc kubenswrapper[4856]: I0126 17:01:01.449909 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-7l927 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 26 17:01:01 crc kubenswrapper[4856]: I0126 17:01:01.450034 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7l927" podUID="94291fa4-24a5-499e-8143-89c8784d9284" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 26 17:01:01 crc kubenswrapper[4856]: I0126 17:01:01.480391 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-fbsj7" podStartSLOduration=10.480365942 podStartE2EDuration="10.480365942s" podCreationTimestamp="2026-01-26 17:00:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:01:01.478013432 +0000 UTC m=+157.431267433" watchObservedRunningTime="2026-01-26 17:01:01.480365942 +0000 UTC m=+157.433619923" Jan 26 17:01:01 crc kubenswrapper[4856]: I0126 17:01:01.511008 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:01 crc kubenswrapper[4856]: E0126 17:01:01.512372 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:02.012353445 +0000 UTC m=+157.965607426 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:01 crc kubenswrapper[4856]: I0126 17:01:01.519803 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-gz7kg" event={"ID":"033cb12f-278f-431a-8104-519db9a3152f","Type":"ContainerStarted","Data":"31c5ee65e1feb6ca373ddf1aafdd94c056851ed234ddbef669b778a0963b59f1"} Jan 26 17:01:01 crc kubenswrapper[4856]: I0126 17:01:01.530512 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2sfhr" event={"ID":"a4d83db5-776f-4e95-a6fa-b194344f9819","Type":"ContainerStarted","Data":"4547c83b8750bed5b224af4ece51940397e9dc8ec7129e201ab54486dd4fd6bf"} Jan 26 17:01:01 crc kubenswrapper[4856]: I0126 17:01:01.574883 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cjzsq" event={"ID":"550752e4-a1d9-46f4-9118-9e9919b2fe6b","Type":"ContainerStarted","Data":"1a543371d727ed7dc1b3d683a835ad3e767732c10f87d4c8dddf2238ff49d0f2"} Jan 26 17:01:01 crc kubenswrapper[4856]: I0126 17:01:01.588214 4856 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l9nqd" event={"ID":"05d74105-0ecd-41ac-9001-8b21b0fd6ba4","Type":"ContainerStarted","Data":"5eb5b3a9fa9c5e2805bb24a61b4fb04fdbb3fc2f8bdab7c1025e5e9a63ac14c0"} Jan 26 17:01:01 crc kubenswrapper[4856]: I0126 17:01:01.591813 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-ddghz" event={"ID":"beb6f283-75cb-4184-b985-4e6c095feca1","Type":"ContainerStarted","Data":"c4ae6024cd1a68b28c900e4e9366d73736bbb8b2a0126f7dde2b9f9ce32cbf08"} Jan 26 17:01:01 crc kubenswrapper[4856]: I0126 17:01:01.593512 4856 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-cb8nk container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" start-of-body= Jan 26 17:01:01 crc kubenswrapper[4856]: I0126 17:01:01.593571 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5bjl7" Jan 26 17:01:01 crc kubenswrapper[4856]: I0126 17:01:01.593567 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" podUID="69008ed1-f3e5-400d-852f-adbcd94199f6" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" Jan 26 17:01:01 crc kubenswrapper[4856]: I0126 17:01:01.595684 4856 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-5bjl7 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 26 17:01:01 crc kubenswrapper[4856]: I0126 
17:01:01.595713 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5bjl7" podUID="2ba3cf6a-a6be-4108-a155-c8bb530aa037" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 26 17:01:01 crc kubenswrapper[4856]: I0126 17:01:01.615339 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:01 crc kubenswrapper[4856]: E0126 17:01:01.616446 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:02.116427734 +0000 UTC m=+158.069681795 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:01 crc kubenswrapper[4856]: I0126 17:01:01.633665 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-wvttb" podStartSLOduration=126.633643472 podStartE2EDuration="2m6.633643472s" podCreationTimestamp="2026-01-26 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:01:01.631494679 +0000 UTC m=+157.584748670" watchObservedRunningTime="2026-01-26 17:01:01.633643472 +0000 UTC m=+157.586897463" Jan 26 17:01:01 crc kubenswrapper[4856]: I0126 17:01:01.653611 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nn46h" podStartSLOduration=126.65359404 podStartE2EDuration="2m6.65359404s" podCreationTimestamp="2026-01-26 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:01:01.652483458 +0000 UTC m=+157.605737449" watchObservedRunningTime="2026-01-26 17:01:01.65359404 +0000 UTC m=+157.606848021" Jan 26 17:01:01 crc kubenswrapper[4856]: I0126 17:01:01.679006 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-h9b2g" podStartSLOduration=127.678984069 podStartE2EDuration="2m7.678984069s" podCreationTimestamp="2026-01-26 16:58:54 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:01:01.676561388 +0000 UTC m=+157.629815379" watchObservedRunningTime="2026-01-26 17:01:01.678984069 +0000 UTC m=+157.632238050" Jan 26 17:01:01 crc kubenswrapper[4856]: I0126 17:01:01.715979 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:01 crc kubenswrapper[4856]: E0126 17:01:01.717589 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:02.217568767 +0000 UTC m=+158.170822748 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:01 crc kubenswrapper[4856]: I0126 17:01:01.916591 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:01 crc kubenswrapper[4856]: I0126 17:01:01.992373 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-8q6q4" podStartSLOduration=61.99235393 podStartE2EDuration="1m1.99235393s" podCreationTimestamp="2026-01-26 17:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:01:01.744889933 +0000 UTC m=+157.698143924" watchObservedRunningTime="2026-01-26 17:01:01.99235393 +0000 UTC m=+157.945607911" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:02.740770 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:02.740825 4856 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:02.740895 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:02.746620 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:01:04 crc kubenswrapper[4856]: E0126 17:01:02.742516 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:03.24247455 +0000 UTC m=+159.195728531 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:02.957683 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:02.958629 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:02.959613 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:02.743378 4856 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-5bjl7 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe 
status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:02.962195 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5bjl7" podUID="2ba3cf6a-a6be-4108-a155-c8bb530aa037" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:02.743760 4856 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-5bjl7 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:02.962302 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5bjl7" podUID="2ba3cf6a-a6be-4108-a155-c8bb530aa037" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:02.744890 4856 patch_prober.go:28] interesting pod/router-default-5444994796-h9b2g container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:02.962348 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9b2g" podUID="85f05bd5-ff83-4d29-9531-ab3499088095" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: 
connect: connection refused" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:02.964835 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:02.974370 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:02.976367 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:02.984977 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-962cr" podStartSLOduration=129.984956141 podStartE2EDuration="2m9.984956141s" podCreationTimestamp="2026-01-26 16:58:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:01:02.957208492 +0000 UTC m=+158.910462483" watchObservedRunningTime="2026-01-26 17:01:02.984956141 +0000 UTC m=+158.938210122" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:02.985278 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.010126 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.013910 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.027404 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.027697 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.035442 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fpqvc" podStartSLOduration=128.035413499 podStartE2EDuration="2m8.035413499s" podCreationTimestamp="2026-01-26 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:01:03.009744172 +0000 UTC m=+158.962998163" watchObservedRunningTime="2026-01-26 17:01:03.035413499 +0000 UTC m=+158.988667490" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.056319 4856 patch_prober.go:28] interesting pod/router-default-5444994796-h9b2g container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.056354 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9b2g" podUID="85f05bd5-ff83-4d29-9531-ab3499088095" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 26 17:01:04 crc 
kubenswrapper[4856]: I0126 17:01:03.058767 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:04 crc kubenswrapper[4856]: E0126 17:01:03.059087 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:03.559022345 +0000 UTC m=+159.512276316 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.063377 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-7xb2b" event={"ID":"77a97acb-2908-48fb-8bcd-0647f3e90160","Type":"ContainerStarted","Data":"a12bfae50af05a4fb853762d6e1356af54d9c96dca053f3311e468d2f354a5e2"} Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.066195 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qdmxz" event={"ID":"37a77f41-5dbf-4842-9e77-83dc22b50f4a","Type":"ContainerStarted","Data":"1982e8f56cf36af0e2f28ffd71f4bae06c0d07c81347f58eb49cd981b71ed717"} Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.094008 4856 patch_prober.go:28] interesting 
pod/catalog-operator-68c6474976-nn46h container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.094029 4856 patch_prober.go:28] interesting pod/console-operator-58897d9998-4pbj2 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.094076 4856 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-wvttb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.094083 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nn46h" podUID="abbeffe1-cfd5-4476-9a8e-2ab5b4869444" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.094097 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-4pbj2" podUID="831dc87e-8e14-43d3-a36e-dc7679041ae5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.094145 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-wvttb" 
podUID="2d37efbf-d18f-486b-9b43-bc4d181af4ca" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.094239 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-7l927 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.094257 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7l927" podUID="94291fa4-24a5-499e-8143-89c8784d9284" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.094294 4856 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-fpqvc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.094374 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fpqvc" podUID="5fe6baed-ab97-4d8a-8be2-6f00f9698136" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.094658 4856 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-cb8nk container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 
10.217.0.11:6443: connect: connection refused" start-of-body= Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.094682 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" podUID="69008ed1-f3e5-400d-852f-adbcd94199f6" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.099995 4856 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-5bjl7 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.100066 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5bjl7" podUID="2ba3cf6a-a6be-4108-a155-c8bb530aa037" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.100700 4856 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-k662z container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.100766 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k662z" podUID="17a72e73-4d54-4a29-a85a-ecb1aff30d10" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Jan 26 
17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.167121 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.167196 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56258114-1bee-4516-ab71-f60d15a9635d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"56258114-1bee-4516-ab71-f60d15a9635d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.167353 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56258114-1bee-4516-ab71-f60d15a9635d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"56258114-1bee-4516-ab71-f60d15a9635d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 17:01:04 crc kubenswrapper[4856]: E0126 17:01:03.167796 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:03.667777192 +0000 UTC m=+159.621031173 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.176495 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rrhjv" podStartSLOduration=128.176465528 podStartE2EDuration="2m8.176465528s" podCreationTimestamp="2026-01-26 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:01:03.165726851 +0000 UTC m=+159.118980852" watchObservedRunningTime="2026-01-26 17:01:03.176465528 +0000 UTC m=+159.129719549" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.222764 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.224829 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-4pbj2" podStartSLOduration=129.224805974 podStartE2EDuration="2m9.224805974s" podCreationTimestamp="2026-01-26 16:58:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:01:03.223418243 +0000 UTC m=+159.176672234" watchObservedRunningTime="2026-01-26 17:01:03.224805974 +0000 UTC m=+159.178059955" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.306952 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.307335 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56258114-1bee-4516-ab71-f60d15a9635d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"56258114-1bee-4516-ab71-f60d15a9635d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.307602 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56258114-1bee-4516-ab71-f60d15a9635d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"56258114-1bee-4516-ab71-f60d15a9635d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 17:01:04 crc kubenswrapper[4856]: E0126 17:01:03.309157 4856 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:03.80912035 +0000 UTC m=+159.762374331 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.315109 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56258114-1bee-4516-ab71-f60d15a9635d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"56258114-1bee-4516-ab71-f60d15a9635d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.354369 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k662z" podStartSLOduration=128.354344314 podStartE2EDuration="2m8.354344314s" podCreationTimestamp="2026-01-26 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:01:03.334407046 +0000 UTC m=+159.287661047" watchObservedRunningTime="2026-01-26 17:01:03.354344314 +0000 UTC m=+159.307598295" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.379479 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56258114-1bee-4516-ab71-f60d15a9635d-kube-api-access\") pod \"revision-pruner-9-crc\" 
(UID: \"56258114-1bee-4516-ab71-f60d15a9635d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.408643 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:04 crc kubenswrapper[4856]: E0126 17:01:03.410515 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:03.91049865 +0000 UTC m=+159.863752631 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.431566 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-gz7kg" podStartSLOduration=128.43154245 podStartE2EDuration="2m8.43154245s" podCreationTimestamp="2026-01-26 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:01:03.430446328 +0000 UTC m=+159.383700319" watchObservedRunningTime="2026-01-26 17:01:03.43154245 +0000 UTC m=+159.384796441" Jan 26 17:01:04 crc kubenswrapper[4856]: 
I0126 17:01:03.669898 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:04 crc kubenswrapper[4856]: E0126 17:01:03.670351 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:04.170333912 +0000 UTC m=+160.123587893 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.673556 4856 patch_prober.go:28] interesting pod/console-f9d7485db-6qgnn container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.26:8443/health\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.673605 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-6qgnn" podUID="b28404ed-2e71-4b3f-9140-35ee89dbc8f2" containerName="console" probeResult="failure" output="Get \"https://10.217.0.26:8443/health\": dial tcp 10.217.0.26:8443: connect: connection refused" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.707909 4856 patch_prober.go:28] interesting 
pod/controller-manager-879f6c89f-lndnt container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.708023 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-lndnt" podUID="1afc0f4c-e02d-4a70-aaba-e761e8c04eee" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.775514 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:04 crc kubenswrapper[4856]: E0126 17:01:03.775981 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:04.275968638 +0000 UTC m=+160.229222619 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.777678 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-7xb2b" podStartSLOduration=128.777640217 podStartE2EDuration="2m8.777640217s" podCreationTimestamp="2026-01-26 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:01:03.753222607 +0000 UTC m=+159.706476588" watchObservedRunningTime="2026-01-26 17:01:03.777640217 +0000 UTC m=+159.730894198" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.812865 4856 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-fpqvc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.812918 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fpqvc" podUID="5fe6baed-ab97-4d8a-8be2-6f00f9698136" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.877929 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:04 crc kubenswrapper[4856]: E0126 17:01:03.878700 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:04.378679666 +0000 UTC m=+160.331933647 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.891988 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-7l927 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.892065 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7l927" podUID="94291fa4-24a5-499e-8143-89c8784d9284" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.892158 4856 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-cb8nk container/oauth-openshift namespace/openshift-authentication: Readiness 
probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" start-of-body= Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.892172 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" podUID="69008ed1-f3e5-400d-852f-adbcd94199f6" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.892248 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-7l927 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.892269 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-7l927" podUID="94291fa4-24a5-499e-8143-89c8784d9284" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.895724 4856 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-mr7cp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:5443/healthz\": dial tcp 10.217.0.19:5443: connect: connection refused" start-of-body= Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.895763 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mr7cp" podUID="f9b8f57e-00b9-4355-ace2-0319d320d208" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.19:5443/healthz\": dial tcp 10.217.0.19:5443: connect: connection refused" Jan 26 17:01:04 crc 
kubenswrapper[4856]: I0126 17:01:03.895831 4856 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-mr7cp container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.19:5443/healthz\": dial tcp 10.217.0.19:5443: connect: connection refused" start-of-body= Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.895845 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mr7cp" podUID="f9b8f57e-00b9-4355-ace2-0319d320d208" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.19:5443/healthz\": dial tcp 10.217.0.19:5443: connect: connection refused" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.896385 4856 patch_prober.go:28] interesting pod/console-operator-58897d9998-4pbj2 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.896407 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-4pbj2" podUID="831dc87e-8e14-43d3-a36e-dc7679041ae5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.896451 4856 patch_prober.go:28] interesting pod/console-operator-58897d9998-4pbj2 container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.896467 4856 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-console-operator/console-operator-58897d9998-4pbj2" podUID="831dc87e-8e14-43d3-a36e-dc7679041ae5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.986451 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:04 crc kubenswrapper[4856]: E0126 17:01:03.987509 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:04.487444324 +0000 UTC m=+160.440698305 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:03.990384 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7p5jt" podStartSLOduration=129.99035844 podStartE2EDuration="2m9.99035844s" podCreationTimestamp="2026-01-26 16:58:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:01:03.986218318 +0000 UTC m=+159.939472309" watchObservedRunningTime="2026-01-26 17:01:03.99035844 +0000 UTC m=+159.943612421" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.028244 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cjzsq" podStartSLOduration=129.028204916 podStartE2EDuration="2m9.028204916s" podCreationTimestamp="2026-01-26 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:01:04.021410696 +0000 UTC m=+159.974664707" watchObservedRunningTime="2026-01-26 17:01:04.028204916 +0000 UTC m=+159.981458897" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.066902 4856 patch_prober.go:28] interesting pod/router-default-5444994796-h9b2g container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 26 
17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.067231 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9b2g" podUID="85f05bd5-ff83-4d29-9531-ab3499088095" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.091563 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:04 crc kubenswrapper[4856]: E0126 17:01:04.091849 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:04.591830442 +0000 UTC m=+160.545084423 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.165482 4856 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-k662z container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.165556 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k662z" podUID="17a72e73-4d54-4a29-a85a-ecb1aff30d10" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.165472 4856 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-k662z container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.165605 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k662z" podUID="17a72e73-4d54-4a29-a85a-ecb1aff30d10" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 
17:01:04.165870 4856 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-nn46h container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.165891 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nn46h" podUID="abbeffe1-cfd5-4476-9a8e-2ab5b4869444" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.165945 4856 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-nn46h container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.166009 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nn46h" podUID="abbeffe1-cfd5-4476-9a8e-2ab5b4869444" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.206001 4856 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-wvttb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.206168 4856 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-marketplace/marketplace-operator-79b997595-wvttb" podUID="2d37efbf-d18f-486b-9b43-bc4d181af4ca" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.206271 4856 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-wvttb container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.206293 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-wvttb" podUID="2d37efbf-d18f-486b-9b43-bc4d181af4ca" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.207059 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:04 crc kubenswrapper[4856]: E0126 17:01:04.207454 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:04.707442511 +0000 UTC m=+160.660696502 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.308006 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:04 crc kubenswrapper[4856]: E0126 17:01:04.308251 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:04.808223033 +0000 UTC m=+160.761477014 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.308726 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:04 crc kubenswrapper[4856]: E0126 17:01:04.309159 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:04.80911513 +0000 UTC m=+160.762369111 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.317420 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-7l927" podStartSLOduration=130.317390754 podStartE2EDuration="2m10.317390754s" podCreationTimestamp="2026-01-26 16:58:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:01:04.27385271 +0000 UTC m=+160.227106701" watchObservedRunningTime="2026-01-26 17:01:04.317390754 +0000 UTC m=+160.270644735" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.317629 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qdmxz" podStartSLOduration=129.317619951 podStartE2EDuration="2m9.317619951s" podCreationTimestamp="2026-01-26 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:01:04.069291668 +0000 UTC m=+160.022545679" watchObservedRunningTime="2026-01-26 17:01:04.317619951 +0000 UTC m=+160.270873942" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.355494 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l9nqd" podStartSLOduration=130.355472097 podStartE2EDuration="2m10.355472097s" podCreationTimestamp="2026-01-26 16:58:54 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:01:04.325758371 +0000 UTC m=+160.279012362" watchObservedRunningTime="2026-01-26 17:01:04.355472097 +0000 UTC m=+160.308726078" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.419726 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:04 crc kubenswrapper[4856]: E0126 17:01:04.420311 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:04.920281348 +0000 UTC m=+160.873535339 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.421170 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:04 crc kubenswrapper[4856]: E0126 17:01:04.421978 4856 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.015s" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.422017 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-6qgnn" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.422050 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-z7cgq" event={"ID":"113d2266-0e67-4e79-8a17-1a78cb9a13d5","Type":"ContainerStarted","Data":"c6c4d71b01ea7a92bb160f8285dd5ca4166d7aea7bf52c195a12c0adddc54878"} Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.422079 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-6qgnn" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.422097 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zzxln" 
event={"ID":"129a0b30-7132-4e3c-ab84-208cae7cb2f2","Type":"ContainerStarted","Data":"aabc7c67d807fdf1f4ca5027d6328c13318343ee6b7d43d56a7f335230cc215a"} Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.422110 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mr7cp" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.422122 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-ddghz" event={"ID":"beb6f283-75cb-4184-b985-4e6c095feca1","Type":"ContainerStarted","Data":"6bdbb4d282933915ce00d8a66dcb4c0d6922b65c0b34b1d7a402740a53527a2b"} Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.422174 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-h9b2g" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.422197 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2sfhr" event={"ID":"a4d83db5-776f-4e95-a6fa-b194344f9819","Type":"ContainerStarted","Data":"4f799d007cd14db60504ef7895ca55c01499d25a3d916f2b08952f82e0ea032c"} Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.422212 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8m4l6" event={"ID":"004316da-16cd-49ab-b14d-282c28da6fad","Type":"ContainerStarted","Data":"57c6b7055d01d117511ba1b8338e885c578822b2d534f7a2171bfaf3838c3df7"} Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.422227 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8m4l6" Jan 26 17:01:04 crc kubenswrapper[4856]: E0126 17:01:04.422489 4856 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:04.922474783 +0000 UTC m=+160.875728774 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.422712 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4w5bf" event={"ID":"0c1af7db-aa80-4cb0-a9cb-5afdf677f28c","Type":"ContainerStarted","Data":"d2920aad12d06efdb0eab5696d56fd3226d3268bbf899b8a38b443ca3cd7108f"} Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.423331 4856 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-k662z container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.423438 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k662z" podUID="17a72e73-4d54-4a29-a85a-ecb1aff30d10" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.423477 4856 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-fpqvc container/route-controller-manager 
namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.423612 4856 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-mr7cp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:5443/healthz\": dial tcp 10.217.0.19:5443: connect: connection refused" start-of-body= Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.423647 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mr7cp" podUID="f9b8f57e-00b9-4355-ace2-0319d320d208" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.19:5443/healthz\": dial tcp 10.217.0.19:5443: connect: connection refused" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.423726 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fpqvc" podUID="5fe6baed-ab97-4d8a-8be2-6f00f9698136" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.423738 4856 patch_prober.go:28] interesting pod/console-operator-58897d9998-4pbj2 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.423572 4856 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-wvttb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get 
\"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.423879 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-4pbj2" podUID="831dc87e-8e14-43d3-a36e-dc7679041ae5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.423911 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-wvttb" podUID="2d37efbf-d18f-486b-9b43-bc4d181af4ca" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.424721 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-7l927 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.425061 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7l927" podUID="94291fa4-24a5-499e-8143-89c8784d9284" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.530899 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 
17:01:04 crc kubenswrapper[4856]: E0126 17:01:04.532122 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:05.032101056 +0000 UTC m=+160.985355037 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.637767 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:04 crc kubenswrapper[4856]: E0126 17:01:04.638504 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:05.138474422 +0000 UTC m=+161.091728443 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.639945 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5bjl7" podStartSLOduration=130.639934035 podStartE2EDuration="2m10.639934035s" podCreationTimestamp="2026-01-26 16:58:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:01:04.493400294 +0000 UTC m=+160.446654295" watchObservedRunningTime="2026-01-26 17:01:04.639934035 +0000 UTC m=+160.593188036" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.642208 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2sfhr" podStartSLOduration=129.642196672 podStartE2EDuration="2m9.642196672s" podCreationTimestamp="2026-01-26 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:01:04.63703063 +0000 UTC m=+160.590284621" watchObservedRunningTime="2026-01-26 17:01:04.642196672 +0000 UTC m=+160.595450663" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.759237 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:04 crc kubenswrapper[4856]: E0126 17:01:04.759736 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:05.259707548 +0000 UTC m=+161.212961529 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.867347 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:04 crc kubenswrapper[4856]: E0126 17:01:04.867753 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:05.367737003 +0000 UTC m=+161.320990984 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.950755 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8m4l6" podStartSLOduration=129.950729531 podStartE2EDuration="2m9.950729531s" podCreationTimestamp="2026-01-26 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:01:04.86628674 +0000 UTC m=+160.819540761" watchObservedRunningTime="2026-01-26 17:01:04.950729531 +0000 UTC m=+160.903983512" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.952831 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-ddghz" podStartSLOduration=129.952821092 podStartE2EDuration="2m9.952821092s" podCreationTimestamp="2026-01-26 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:01:04.925098445 +0000 UTC m=+160.878352446" watchObservedRunningTime="2026-01-26 17:01:04.952821092 +0000 UTC m=+160.906075073" Jan 26 17:01:04 crc kubenswrapper[4856]: I0126 17:01:04.970941 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:04 crc kubenswrapper[4856]: E0126 17:01:04.971358 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:05.471322488 +0000 UTC m=+161.424576459 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:05 crc kubenswrapper[4856]: I0126 17:01:05.056728 4856 patch_prober.go:28] interesting pod/router-default-5444994796-h9b2g container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 26 17:01:05 crc kubenswrapper[4856]: I0126 17:01:05.056796 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9b2g" podUID="85f05bd5-ff83-4d29-9531-ab3499088095" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 26 17:01:05 crc kubenswrapper[4856]: I0126 17:01:05.064497 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 17:01:05 crc kubenswrapper[4856]: I0126 17:01:05.072962 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:05 crc kubenswrapper[4856]: E0126 17:01:05.073915 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:05.573898293 +0000 UTC m=+161.527152274 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:05 crc kubenswrapper[4856]: I0126 17:01:05.200191 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:05 crc kubenswrapper[4856]: E0126 17:01:05.201549 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:05.701511746 +0000 UTC m=+161.654765737 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:05 crc kubenswrapper[4856]: I0126 17:01:05.262171 4856 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-5bjl7 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 26 17:01:05 crc kubenswrapper[4856]: I0126 17:01:05.262331 4856 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-5bjl7 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 26 17:01:05 crc kubenswrapper[4856]: I0126 17:01:05.262384 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5bjl7" podUID="2ba3cf6a-a6be-4108-a155-c8bb530aa037" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 26 17:01:05 crc kubenswrapper[4856]: I0126 17:01:05.262466 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5bjl7" 
podUID="2ba3cf6a-a6be-4108-a155-c8bb530aa037" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 26 17:01:05 crc kubenswrapper[4856]: I0126 17:01:05.301831 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:05 crc kubenswrapper[4856]: E0126 17:01:05.302564 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:05.802490844 +0000 UTC m=+161.755744815 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:05 crc kubenswrapper[4856]: I0126 17:01:05.502393 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:05 crc kubenswrapper[4856]: E0126 17:01:05.505670 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:06.005640485 +0000 UTC m=+161.958894466 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:05 crc kubenswrapper[4856]: I0126 17:01:05.642388 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:05 crc kubenswrapper[4856]: E0126 17:01:05.647825 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:06.147808597 +0000 UTC m=+162.101062578 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:05 crc kubenswrapper[4856]: I0126 17:01:05.655841 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4w5bf" podStartSLOduration=131.655811043 podStartE2EDuration="2m11.655811043s" podCreationTimestamp="2026-01-26 16:58:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:01:05.030816692 +0000 UTC m=+160.984070673" watchObservedRunningTime="2026-01-26 17:01:05.655811043 +0000 UTC m=+161.609065024" Jan 26 17:01:05 crc kubenswrapper[4856]: I0126 17:01:05.743123 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:05 crc kubenswrapper[4856]: E0126 17:01:05.743446 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:06.243414917 +0000 UTC m=+162.196668898 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:05 crc kubenswrapper[4856]: I0126 17:01:05.895030 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:05 crc kubenswrapper[4856]: E0126 17:01:05.895468 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:06.39545241 +0000 UTC m=+162.348706391 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:06 crc kubenswrapper[4856]: I0126 17:01:06.018606 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:06 crc kubenswrapper[4856]: E0126 17:01:06.018968 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:06.518951772 +0000 UTC m=+162.472205753 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:06 crc kubenswrapper[4856]: I0126 17:01:06.139629 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:06 crc kubenswrapper[4856]: E0126 17:01:06.140311 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:06.640295131 +0000 UTC m=+162.593549122 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:06 crc kubenswrapper[4856]: I0126 17:01:06.241136 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:06 crc kubenswrapper[4856]: E0126 17:01:06.241388 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:06.74137154 +0000 UTC m=+162.694625521 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:06 crc kubenswrapper[4856]: I0126 17:01:06.344342 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:06 crc kubenswrapper[4856]: E0126 17:01:06.344732 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:06.844716888 +0000 UTC m=+162.797970869 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:06 crc kubenswrapper[4856]: I0126 17:01:06.444304 4856 patch_prober.go:28] interesting pod/router-default-5444994796-h9b2g container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 26 17:01:06 crc kubenswrapper[4856]: I0126 17:01:06.444443 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9b2g" podUID="85f05bd5-ff83-4d29-9531-ab3499088095" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 26 17:01:06 crc kubenswrapper[4856]: I0126 17:01:06.444884 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:06 crc kubenswrapper[4856]: E0126 17:01:06.445713 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:06.945697836 +0000 UTC m=+162.898951807 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:06 crc kubenswrapper[4856]: I0126 17:01:06.547664 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:06 crc kubenswrapper[4856]: E0126 17:01:06.548305 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:07.048290771 +0000 UTC m=+163.001544752 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:06 crc kubenswrapper[4856]: I0126 17:01:06.699494 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:06 crc kubenswrapper[4856]: E0126 17:01:06.700105 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:07.200089288 +0000 UTC m=+163.153343269 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:06 crc kubenswrapper[4856]: I0126 17:01:06.712263 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-58fcz" event={"ID":"73de6ef2-e139-4185-9f56-9db885734ffe","Type":"ContainerStarted","Data":"c6d018030b40179bdc7061fd342e5778334c8d595e566f8dccd7909d4cdfdb65"} Jan 26 17:01:06 crc kubenswrapper[4856]: I0126 17:01:06.742251 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-dgcqn" event={"ID":"ac10f013-cd1f-47e0-8f1c-5ff4e6e75784","Type":"ContainerStarted","Data":"721b13b3a9452dc526cb9f1d2ec4056ad4445955462616f9a8c3ab38495dfb4a"} Jan 26 17:01:06 crc kubenswrapper[4856]: I0126 17:01:06.743044 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-dgcqn" Jan 26 17:01:06 crc kubenswrapper[4856]: I0126 17:01:06.743324 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"7478646e9933e69628e5ab6af89d35504a3abd1eb3586313caed673dc6a2653e"} Jan 26 17:01:06 crc kubenswrapper[4856]: I0126 17:01:06.744746 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-vfm8t" event={"ID":"c8657575-cd22-4ebc-ae9d-4174366985d3","Type":"ContainerStarted","Data":"386ffe7f255acb85e646366b8112901fa773aedab0c0512515eb85e39d1d12ef"} Jan 26 17:01:06 crc kubenswrapper[4856]: I0126 17:01:06.746832 4856 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-6rlxp" event={"ID":"a6d331bd-2db3-4319-9f5c-db56d408d9e3","Type":"ContainerStarted","Data":"4d58f3b5a90464745c269a6b00df4216969791b4eff2dcd4354fd144525f157c"} Jan 26 17:01:06 crc kubenswrapper[4856]: I0126 17:01:06.818949 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cghs" event={"ID":"81c2f96b-55e0-483b-b72c-df7e156e9218","Type":"ContainerStarted","Data":"656e05b6881c7f1686e0f11444cdd88478ced84f0212c2e6645b5e4be4f15871"} Jan 26 17:01:06 crc kubenswrapper[4856]: I0126 17:01:06.822776 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:06 crc kubenswrapper[4856]: E0126 17:01:06.828134 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:07.328114723 +0000 UTC m=+163.281368704 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:07 crc kubenswrapper[4856]: I0126 17:01:07.195424 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:07 crc kubenswrapper[4856]: E0126 17:01:07.196956 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:07.69692969 +0000 UTC m=+163.650183721 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:07 crc kubenswrapper[4856]: I0126 17:01:07.199693 4856 patch_prober.go:28] interesting pod/router-default-5444994796-h9b2g container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 26 17:01:07 crc kubenswrapper[4856]: I0126 17:01:07.199753 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9b2g" podUID="85f05bd5-ff83-4d29-9531-ab3499088095" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 26 17:01:07 crc kubenswrapper[4856]: I0126 17:01:07.305495 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:07 crc kubenswrapper[4856]: E0126 17:01:07.306165 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:07.8061503 +0000 UTC m=+163.759404281 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:07 crc kubenswrapper[4856]: I0126 17:01:07.413685 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:07 crc kubenswrapper[4856]: E0126 17:01:07.414353 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:07.91431132 +0000 UTC m=+163.867565311 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:07 crc kubenswrapper[4856]: I0126 17:01:07.414793 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:07 crc kubenswrapper[4856]: E0126 17:01:07.415543 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:07.915514595 +0000 UTC m=+163.868768576 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:07 crc kubenswrapper[4856]: I0126 17:01:07.518116 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-z7cgq" podStartSLOduration=133.518097931 podStartE2EDuration="2m13.518097931s" podCreationTimestamp="2026-01-26 16:58:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:01:07.517129372 +0000 UTC m=+163.470383353" watchObservedRunningTime="2026-01-26 17:01:07.518097931 +0000 UTC m=+163.471351912" Jan 26 17:01:07 crc kubenswrapper[4856]: I0126 17:01:07.524021 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:07 crc kubenswrapper[4856]: E0126 17:01:07.524299 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:08.024256432 +0000 UTC m=+163.977510413 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:07 crc kubenswrapper[4856]: I0126 17:01:07.525002 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:07 crc kubenswrapper[4856]: E0126 17:01:07.525557 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:08.02554286 +0000 UTC m=+163.978796841 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:07 crc kubenswrapper[4856]: I0126 17:01:07.559278 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 17:01:07 crc kubenswrapper[4856]: I0126 17:01:07.560022 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 17:01:07 crc kubenswrapper[4856]: I0126 17:01:07.622753 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 26 17:01:07 crc kubenswrapper[4856]: I0126 17:01:07.622931 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 26 17:01:07 crc kubenswrapper[4856]: I0126 17:01:07.626663 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:07 crc kubenswrapper[4856]: I0126 17:01:07.626985 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d8c22047-144c-402a-80c5-c206539b6826-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d8c22047-144c-402a-80c5-c206539b6826\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 17:01:07 crc kubenswrapper[4856]: I0126 17:01:07.627095 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d8c22047-144c-402a-80c5-c206539b6826-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d8c22047-144c-402a-80c5-c206539b6826\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 17:01:07 crc kubenswrapper[4856]: E0126 17:01:07.627288 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 17:01:08.12726609 +0000 UTC m=+164.080520071 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:07 crc kubenswrapper[4856]: I0126 17:01:07.726928 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 17:01:07 crc kubenswrapper[4856]: I0126 17:01:07.727665 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zzxln" podStartSLOduration=132.72764835 podStartE2EDuration="2m12.72764835s" podCreationTimestamp="2026-01-26 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:01:07.704941751 +0000 UTC m=+163.658195732" watchObservedRunningTime="2026-01-26 17:01:07.72764835 +0000 UTC m=+163.680902331" Jan 26 17:01:07 crc kubenswrapper[4856]: I0126 17:01:07.728312 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d8c22047-144c-402a-80c5-c206539b6826-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d8c22047-144c-402a-80c5-c206539b6826\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 17:01:07 crc kubenswrapper[4856]: I0126 17:01:07.738852 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:07 crc kubenswrapper[4856]: I0126 17:01:07.738898 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d8c22047-144c-402a-80c5-c206539b6826-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d8c22047-144c-402a-80c5-c206539b6826\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 17:01:07 crc kubenswrapper[4856]: I0126 17:01:07.739041 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d8c22047-144c-402a-80c5-c206539b6826-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d8c22047-144c-402a-80c5-c206539b6826\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 17:01:07 crc kubenswrapper[4856]: E0126 17:01:07.739388 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:08.239372556 +0000 UTC m=+164.192626537 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:07 crc kubenswrapper[4856]: I0126 17:01:07.856047 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:07 crc kubenswrapper[4856]: E0126 17:01:07.856912 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:08.356893512 +0000 UTC m=+164.310147493 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:07 crc kubenswrapper[4856]: I0126 17:01:07.896893 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d8c22047-144c-402a-80c5-c206539b6826-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d8c22047-144c-402a-80c5-c206539b6826\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 17:01:08 crc kubenswrapper[4856]: I0126 17:01:07.964706 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:08 crc kubenswrapper[4856]: E0126 17:01:07.965162 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:08.465147614 +0000 UTC m=+164.418401595 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:08 crc kubenswrapper[4856]: I0126 17:01:08.071502 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:08 crc kubenswrapper[4856]: E0126 17:01:08.072199 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:08.5721716 +0000 UTC m=+164.525425581 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:08 crc kubenswrapper[4856]: I0126 17:01:08.081758 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 17:01:08 crc kubenswrapper[4856]: I0126 17:01:08.111733 4856 patch_prober.go:28] interesting pod/router-default-5444994796-h9b2g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 17:01:08 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Jan 26 17:01:08 crc kubenswrapper[4856]: [+]process-running ok Jan 26 17:01:08 crc kubenswrapper[4856]: healthz check failed Jan 26 17:01:08 crc kubenswrapper[4856]: I0126 17:01:08.111823 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9b2g" podUID="85f05bd5-ff83-4d29-9531-ab3499088095" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:01:08 crc kubenswrapper[4856]: I0126 17:01:08.206233 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:08 crc kubenswrapper[4856]: E0126 17:01:08.206676 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:08.706663936 +0000 UTC m=+164.659917917 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:08 crc kubenswrapper[4856]: I0126 17:01:08.307305 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:08 crc kubenswrapper[4856]: E0126 17:01:08.307437 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:08.807413117 +0000 UTC m=+164.760667098 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:08 crc kubenswrapper[4856]: I0126 17:01:08.307537 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:08 crc kubenswrapper[4856]: E0126 17:01:08.307987 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:08.807970904 +0000 UTC m=+164.761224885 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:08 crc kubenswrapper[4856]: I0126 17:01:08.309553 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"0b843bfa4e4f72e7f44348fbd591ae8f8a30a66e1fcd865ab9fae8f878ae08fa"} Jan 26 17:01:08 crc kubenswrapper[4856]: I0126 17:01:08.319911 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"69d2574f87ca003643e7ca59ab73844ae16eabe1203ec86211d4b7f831a4c54e"} Jan 26 17:01:08 crc kubenswrapper[4856]: I0126 17:01:08.320308 4856 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-5bjl7 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 26 17:01:08 crc kubenswrapper[4856]: I0126 17:01:08.320347 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5bjl7" podUID="2ba3cf6a-a6be-4108-a155-c8bb530aa037" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 26 17:01:08 crc kubenswrapper[4856]: I0126 17:01:08.320381 4856 kubelet.go:2542] 
"SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5bjl7" Jan 26 17:01:08 crc kubenswrapper[4856]: I0126 17:01:08.320937 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"7696af9bf5eb7a27c45bc9a500fea17921f66464546a3b193d1abfd56ccd50c4"} pod="openshift-config-operator/openshift-config-operator-7777fb866f-5bjl7" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Jan 26 17:01:08 crc kubenswrapper[4856]: I0126 17:01:08.321134 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5bjl7" podUID="2ba3cf6a-a6be-4108-a155-c8bb530aa037" containerName="openshift-config-operator" containerID="cri-o://7696af9bf5eb7a27c45bc9a500fea17921f66464546a3b193d1abfd56ccd50c4" gracePeriod=30 Jan 26 17:01:08 crc kubenswrapper[4856]: I0126 17:01:08.321340 4856 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-5bjl7 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 26 17:01:08 crc kubenswrapper[4856]: I0126 17:01:08.321382 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5bjl7" podUID="2ba3cf6a-a6be-4108-a155-c8bb530aa037" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 26 17:01:08 crc kubenswrapper[4856]: I0126 17:01:08.321782 4856 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-5bjl7 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe 
status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 26 17:01:08 crc kubenswrapper[4856]: I0126 17:01:08.321803 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5bjl7" podUID="2ba3cf6a-a6be-4108-a155-c8bb530aa037" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 26 17:01:08 crc kubenswrapper[4856]: I0126 17:01:08.378752 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-dgcqn" podStartSLOduration=17.3787288 podStartE2EDuration="17.3787288s" podCreationTimestamp="2026-01-26 17:00:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:01:08.10641412 +0000 UTC m=+164.059668111" watchObservedRunningTime="2026-01-26 17:01:08.3787288 +0000 UTC m=+164.331982781" Jan 26 17:01:08 crc kubenswrapper[4856]: I0126 17:01:08.433772 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"108b2ba5b94983d79d93282a417b0fe8f3ba567db5dc09cf99793744d5af7e2f"} Jan 26 17:01:08 crc kubenswrapper[4856]: I0126 17:01:08.435626 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:08 crc kubenswrapper[4856]: I0126 17:01:08.453314 4856 patch_prober.go:28] interesting pod/dns-default-dgcqn container/dns 
namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.217.0.43:8181/ready\": dial tcp 10.217.0.43:8181: connect: connection refused" start-of-body= Jan 26 17:01:08 crc kubenswrapper[4856]: I0126 17:01:08.453424 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-dgcqn" podUID="ac10f013-cd1f-47e0-8f1c-5ff4e6e75784" containerName="dns" probeResult="failure" output="Get \"http://10.217.0.43:8181/ready\": dial tcp 10.217.0.43:8181: connect: connection refused" Jan 26 17:01:08 crc kubenswrapper[4856]: E0126 17:01:08.506428 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:09.006398845 +0000 UTC m=+164.959652826 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:08 crc kubenswrapper[4856]: I0126 17:01:08.603497 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:08 crc kubenswrapper[4856]: E0126 17:01:08.606372 4856 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:09.106353793 +0000 UTC m=+165.059607764 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:08 crc kubenswrapper[4856]: I0126 17:01:08.699712 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-6rlxp" Jan 26 17:01:08 crc kubenswrapper[4856]: I0126 17:01:08.700122 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-6rlxp" Jan 26 17:01:08 crc kubenswrapper[4856]: I0126 17:01:08.704409 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:08 crc kubenswrapper[4856]: E0126 17:01:08.704944 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:09.20492577 +0000 UTC m=+165.158179751 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:08 crc kubenswrapper[4856]: I0126 17:01:08.723733 4856 patch_prober.go:28] interesting pod/apiserver-76f77b778f-6rlxp container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 26 17:01:08 crc kubenswrapper[4856]: I0126 17:01:08.723818 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-6rlxp" podUID="a6d331bd-2db3-4319-9f5c-db56d408d9e3" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 26 17:01:08 crc kubenswrapper[4856]: I0126 17:01:08.807224 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:08 crc kubenswrapper[4856]: E0126 17:01:08.807597 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:09.307583617 +0000 UTC m=+165.260837598 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:08 crc kubenswrapper[4856]: I0126 17:01:08.907041 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-58fcz" podStartSLOduration=134.907016319 podStartE2EDuration="2m14.907016319s" podCreationTimestamp="2026-01-26 16:58:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:01:08.381822242 +0000 UTC m=+164.335076223" watchObservedRunningTime="2026-01-26 17:01:08.907016319 +0000 UTC m=+164.860270300" Jan 26 17:01:08 crc kubenswrapper[4856]: I0126 17:01:08.908287 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:08 crc kubenswrapper[4856]: E0126 17:01:08.908735 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:09.408717719 +0000 UTC m=+165.361971700 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:08 crc kubenswrapper[4856]: I0126 17:01:08.908759 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 17:01:09 crc kubenswrapper[4856]: I0126 17:01:09.017351 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:09 crc kubenswrapper[4856]: E0126 17:01:09.017904 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:09.517891789 +0000 UTC m=+165.471145770 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:09 crc kubenswrapper[4856]: I0126 17:01:09.094633 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cghs" Jan 26 17:01:09 crc kubenswrapper[4856]: I0126 17:01:09.094998 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cghs" Jan 26 17:01:09 crc kubenswrapper[4856]: I0126 17:01:09.103917 4856 patch_prober.go:28] interesting pod/router-default-5444994796-h9b2g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 17:01:09 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Jan 26 17:01:09 crc kubenswrapper[4856]: [+]process-running ok Jan 26 17:01:09 crc kubenswrapper[4856]: healthz check failed Jan 26 17:01:09 crc kubenswrapper[4856]: I0126 17:01:09.103983 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9b2g" podUID="85f05bd5-ff83-4d29-9531-ab3499088095" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:01:09 crc kubenswrapper[4856]: I0126 17:01:09.115832 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-dgcqn" Jan 26 17:01:09 crc kubenswrapper[4856]: I0126 17:01:09.121102 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:09 crc kubenswrapper[4856]: E0126 17:01:09.121388 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:09.621372021 +0000 UTC m=+165.574626002 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:09 crc kubenswrapper[4856]: I0126 17:01:09.121816 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:09 crc kubenswrapper[4856]: E0126 17:01:09.122128 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:09.622121833 +0000 UTC m=+165.575375804 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:09 crc kubenswrapper[4856]: I0126 17:01:09.209100 4856 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-6cghs container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.22:8443/livez\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Jan 26 17:01:09 crc kubenswrapper[4856]: I0126 17:01:09.209156 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cghs" podUID="81c2f96b-55e0-483b-b72c-df7e156e9218" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.22:8443/livez\": dial tcp 10.217.0.22:8443: connect: connection refused" Jan 26 17:01:09 crc kubenswrapper[4856]: I0126 17:01:09.222745 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:09 crc kubenswrapper[4856]: E0126 17:01:09.223340 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:09.723309997 +0000 UTC m=+165.676563978 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:09 crc kubenswrapper[4856]: I0126 17:01:09.324417 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:09 crc kubenswrapper[4856]: E0126 17:01:09.325394 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:09.825364006 +0000 UTC m=+165.778617987 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:09 crc kubenswrapper[4856]: I0126 17:01:09.397293 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cghs" podStartSLOduration=134.397269197 podStartE2EDuration="2m14.397269197s" podCreationTimestamp="2026-01-26 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:01:09.115494517 +0000 UTC m=+165.068748518" watchObservedRunningTime="2026-01-26 17:01:09.397269197 +0000 UTC m=+165.350523178" Jan 26 17:01:09 crc kubenswrapper[4856]: I0126 17:01:09.429272 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:09 crc kubenswrapper[4856]: E0126 17:01:09.429981 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:09.929961401 +0000 UTC m=+165.883215382 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:09 crc kubenswrapper[4856]: I0126 17:01:09.450428 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7777fb866f-5bjl7_2ba3cf6a-a6be-4108-a155-c8bb530aa037/openshift-config-operator/0.log" Jan 26 17:01:09 crc kubenswrapper[4856]: I0126 17:01:09.452234 4856 generic.go:334] "Generic (PLEG): container finished" podID="2ba3cf6a-a6be-4108-a155-c8bb530aa037" containerID="7696af9bf5eb7a27c45bc9a500fea17921f66464546a3b193d1abfd56ccd50c4" exitCode=2 Jan 26 17:01:09 crc kubenswrapper[4856]: I0126 17:01:09.531005 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:09 crc kubenswrapper[4856]: E0126 17:01:09.534022 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:10.034002369 +0000 UTC m=+165.987256360 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:09 crc kubenswrapper[4856]: I0126 17:01:09.566753 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:01:09 crc kubenswrapper[4856]: I0126 17:01:09.566790 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"02669c91c1c07af72d4562227d525afec05b92f59598d7c5e3d7bae5dd7ee11d"} Jan 26 17:01:09 crc kubenswrapper[4856]: I0126 17:01:09.566803 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5bjl7" event={"ID":"2ba3cf6a-a6be-4108-a155-c8bb530aa037","Type":"ContainerDied","Data":"7696af9bf5eb7a27c45bc9a500fea17921f66464546a3b193d1abfd56ccd50c4"} Jan 26 17:01:09 crc kubenswrapper[4856]: I0126 17:01:09.566815 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"56258114-1bee-4516-ab71-f60d15a9635d","Type":"ContainerStarted","Data":"02721e310b92647ac526a2662cb5ceecb1c039ddbbb08ec7de0c0bb07775f5b7"} Jan 26 17:01:09 crc kubenswrapper[4856]: I0126 17:01:09.567299 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-6rlxp" podStartSLOduration=136.56727413 podStartE2EDuration="2m16.56727413s" podCreationTimestamp="2026-01-26 16:58:53 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:01:09.399296546 +0000 UTC m=+165.352550547" watchObservedRunningTime="2026-01-26 17:01:09.56727413 +0000 UTC m=+165.520528111" Jan 26 17:01:09 crc kubenswrapper[4856]: I0126 17:01:09.636311 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:09 crc kubenswrapper[4856]: E0126 17:01:09.636793 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:10.136775939 +0000 UTC m=+166.090029920 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:09 crc kubenswrapper[4856]: I0126 17:01:09.742436 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:09 crc kubenswrapper[4856]: E0126 17:01:09.742856 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:10.242840387 +0000 UTC m=+166.196094378 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:09 crc kubenswrapper[4856]: I0126 17:01:09.855776 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:09 crc kubenswrapper[4856]: E0126 17:01:09.856178 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:10.356141738 +0000 UTC m=+166.309395719 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:09 crc kubenswrapper[4856]: I0126 17:01:09.957981 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:09 crc kubenswrapper[4856]: E0126 17:01:09.958377 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:10.458362942 +0000 UTC m=+166.411616923 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:10 crc kubenswrapper[4856]: I0126 17:01:10.268741 4856 patch_prober.go:28] interesting pod/router-default-5444994796-h9b2g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 17:01:10 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Jan 26 17:01:10 crc kubenswrapper[4856]: [+]process-running ok Jan 26 17:01:10 crc kubenswrapper[4856]: healthz check failed Jan 26 17:01:10 crc kubenswrapper[4856]: I0126 17:01:10.268809 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9b2g" podUID="85f05bd5-ff83-4d29-9531-ab3499088095" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:01:10 crc kubenswrapper[4856]: I0126 17:01:10.269610 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:10 crc kubenswrapper[4856]: E0126 17:01:10.269953 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 17:01:10.76993658 +0000 UTC m=+166.723190561 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:10 crc kubenswrapper[4856]: I0126 17:01:10.383781 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:10 crc kubenswrapper[4856]: E0126 17:01:10.384140 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:10.884127358 +0000 UTC m=+166.837381339 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:10 crc kubenswrapper[4856]: I0126 17:01:10.504120 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:10 crc kubenswrapper[4856]: E0126 17:01:10.504383 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:11.004367253 +0000 UTC m=+166.957621234 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:10 crc kubenswrapper[4856]: I0126 17:01:10.556999 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"fbce6e92307ee43d8be105237704a8ad799999f886a73ccdca4b10e58a820780"} Jan 26 17:01:10 crc kubenswrapper[4856]: I0126 17:01:10.630269 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:10 crc kubenswrapper[4856]: E0126 17:01:10.631121 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:11.131105511 +0000 UTC m=+167.084359492 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:10 crc kubenswrapper[4856]: I0126 17:01:10.886274 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:10 crc kubenswrapper[4856]: E0126 17:01:10.886873 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:11.386853393 +0000 UTC m=+167.340107374 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:10 crc kubenswrapper[4856]: I0126 17:01:10.993733 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:10 crc kubenswrapper[4856]: E0126 17:01:10.994124 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:11.494107246 +0000 UTC m=+167.447361237 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:11 crc kubenswrapper[4856]: I0126 17:01:11.556909 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:11 crc kubenswrapper[4856]: I0126 17:01:11.557399 4856 patch_prober.go:28] interesting pod/router-default-5444994796-h9b2g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 17:01:11 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Jan 26 17:01:11 crc kubenswrapper[4856]: [+]process-running ok Jan 26 17:01:11 crc kubenswrapper[4856]: healthz check failed Jan 26 17:01:11 crc kubenswrapper[4856]: I0126 17:01:11.557479 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9b2g" podUID="85f05bd5-ff83-4d29-9531-ab3499088095" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:01:11 crc kubenswrapper[4856]: I0126 17:01:11.558376 4856 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-5bjl7 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 
10.217.0.10:8443: connect: connection refused" start-of-body= Jan 26 17:01:11 crc kubenswrapper[4856]: I0126 17:01:11.558452 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5bjl7" podUID="2ba3cf6a-a6be-4108-a155-c8bb530aa037" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 26 17:01:11 crc kubenswrapper[4856]: E0126 17:01:11.558644 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:12.558621673 +0000 UTC m=+168.511875674 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:11 crc kubenswrapper[4856]: I0126 17:01:11.917744 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:12 crc kubenswrapper[4856]: E0126 17:01:11.918509 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 17:01:12.418484935 +0000 UTC m=+168.371738916 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:12 crc kubenswrapper[4856]: I0126 17:01:11.923414 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:12 crc kubenswrapper[4856]: E0126 17:01:11.941073 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:12.441051021 +0000 UTC m=+168.394305002 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:12 crc kubenswrapper[4856]: I0126 17:01:12.069100 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:12 crc kubenswrapper[4856]: I0126 17:01:12.069682 4856 patch_prober.go:28] interesting pod/router-default-5444994796-h9b2g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 17:01:12 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Jan 26 17:01:12 crc kubenswrapper[4856]: [+]process-running ok Jan 26 17:01:12 crc kubenswrapper[4856]: healthz check failed Jan 26 17:01:12 crc kubenswrapper[4856]: I0126 17:01:12.069733 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9b2g" podUID="85f05bd5-ff83-4d29-9531-ab3499088095" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:01:12 crc kubenswrapper[4856]: E0126 17:01:12.069983 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 17:01:12.569955322 +0000 UTC m=+168.523209313 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:12 crc kubenswrapper[4856]: I0126 17:01:12.223294 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:12 crc kubenswrapper[4856]: E0126 17:01:12.223653 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:12.723641374 +0000 UTC m=+168.676895355 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:12 crc kubenswrapper[4856]: I0126 17:01:12.350036 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:12 crc kubenswrapper[4856]: E0126 17:01:12.350541 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:12.850505865 +0000 UTC m=+168.803759846 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:12 crc kubenswrapper[4856]: I0126 17:01:12.505130 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:12 crc kubenswrapper[4856]: E0126 17:01:12.505430 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:13.005418344 +0000 UTC m=+168.958672325 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:12 crc kubenswrapper[4856]: I0126 17:01:12.816806 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:12 crc kubenswrapper[4856]: E0126 17:01:12.817418 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:13.317402324 +0000 UTC m=+169.270656305 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:13 crc kubenswrapper[4856]: I0126 17:01:12.846032 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7777fb866f-5bjl7_2ba3cf6a-a6be-4108-a155-c8bb530aa037/openshift-config-operator/0.log" Jan 26 17:01:13 crc kubenswrapper[4856]: I0126 17:01:12.850250 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5bjl7" event={"ID":"2ba3cf6a-a6be-4108-a155-c8bb530aa037","Type":"ContainerStarted","Data":"6f3704a0f6342f24993fae1a0b1a22eff40cbcbc5b7f46e53e9ada0cdde1aa55"} Jan 26 17:01:13 crc kubenswrapper[4856]: I0126 17:01:12.851395 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5bjl7" Jan 26 17:01:13 crc kubenswrapper[4856]: I0126 17:01:12.920650 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:13 crc kubenswrapper[4856]: E0126 17:01:12.921189 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 17:01:13.421165724 +0000 UTC m=+169.374419705 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:13 crc kubenswrapper[4856]: I0126 17:01:13.092722 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:13 crc kubenswrapper[4856]: I0126 17:01:13.093056 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"56258114-1bee-4516-ab71-f60d15a9635d","Type":"ContainerStarted","Data":"688d67199a7c309449abcb4f65cea024a314835283d336314d00644e67f44daf"} Jan 26 17:01:13 crc kubenswrapper[4856]: E0126 17:01:13.093161 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:13.593123485 +0000 UTC m=+169.546377466 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:13 crc kubenswrapper[4856]: I0126 17:01:13.110599 4856 patch_prober.go:28] interesting pod/router-default-5444994796-h9b2g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 17:01:13 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Jan 26 17:01:13 crc kubenswrapper[4856]: [+]process-running ok Jan 26 17:01:13 crc kubenswrapper[4856]: healthz check failed Jan 26 17:01:13 crc kubenswrapper[4856]: I0126 17:01:13.110686 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9b2g" podUID="85f05bd5-ff83-4d29-9531-ab3499088095" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:01:13 crc kubenswrapper[4856]: I0126 17:01:13.635164 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:13 crc kubenswrapper[4856]: E0126 17:01:13.635921 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 17:01:14.63589754 +0000 UTC m=+170.589151521 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:13 crc kubenswrapper[4856]: I0126 17:01:13.655256 4856 patch_prober.go:28] interesting pod/console-f9d7485db-6qgnn container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.26:8443/health\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Jan 26 17:01:13 crc kubenswrapper[4856]: I0126 17:01:13.655321 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-6qgnn" podUID="b28404ed-2e71-4b3f-9140-35ee89dbc8f2" containerName="console" probeResult="failure" output="Get \"https://10.217.0.26:8443/health\": dial tcp 10.217.0.26:8443: connect: connection refused" Jan 26 17:01:14 crc kubenswrapper[4856]: I0126 17:01:14.131374 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:14 crc kubenswrapper[4856]: E0126 17:01:14.131729 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 17:01:14.631717461 +0000 UTC m=+170.584971442 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:14 crc kubenswrapper[4856]: I0126 17:01:14.161858 4856 patch_prober.go:28] interesting pod/router-default-5444994796-h9b2g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 17:01:14 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Jan 26 17:01:14 crc kubenswrapper[4856]: [+]process-running ok Jan 26 17:01:14 crc kubenswrapper[4856]: healthz check failed Jan 26 17:01:14 crc kubenswrapper[4856]: I0126 17:01:14.162212 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9b2g" podUID="85f05bd5-ff83-4d29-9531-ab3499088095" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:01:14 crc kubenswrapper[4856]: I0126 17:01:14.232300 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:14 crc kubenswrapper[4856]: E0126 17:01:14.232817 4856 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:14.732793412 +0000 UTC m=+170.686047393 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:14 crc kubenswrapper[4856]: I0126 17:01:14.265059 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-vfm8t" event={"ID":"c8657575-cd22-4ebc-ae9d-4174366985d3","Type":"ContainerStarted","Data":"d0091273fe8514a269a2651f8f1656ec323dcdb4398bc05ab2b74d331928cdf7"} Jan 26 17:01:14 crc kubenswrapper[4856]: I0126 17:01:14.335148 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:14 crc kubenswrapper[4856]: E0126 17:01:14.337495 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:14.837474759 +0000 UTC m=+170.790728730 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:14 crc kubenswrapper[4856]: I0126 17:01:14.461205 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:14 crc kubenswrapper[4856]: E0126 17:01:14.461828 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:14.961803856 +0000 UTC m=+170.915057837 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:14 crc kubenswrapper[4856]: I0126 17:01:14.480753 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-7l927 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 26 17:01:14 crc kubenswrapper[4856]: I0126 17:01:14.480839 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-7l927" podUID="94291fa4-24a5-499e-8143-89c8784d9284" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 26 17:01:14 crc kubenswrapper[4856]: I0126 17:01:14.482484 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-7l927 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 26 17:01:14 crc kubenswrapper[4856]: I0126 17:01:14.482570 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7l927" podUID="94291fa4-24a5-499e-8143-89c8784d9284" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 26 17:01:14 crc kubenswrapper[4856]: I0126 17:01:14.482814 4856 patch_prober.go:28] interesting pod/console-operator-58897d9998-4pbj2 
container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 26 17:01:14 crc kubenswrapper[4856]: I0126 17:01:14.482844 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-4pbj2" podUID="831dc87e-8e14-43d3-a36e-dc7679041ae5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 26 17:01:14 crc kubenswrapper[4856]: I0126 17:01:14.482850 4856 patch_prober.go:28] interesting pod/console-operator-58897d9998-4pbj2 container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 26 17:01:14 crc kubenswrapper[4856]: I0126 17:01:14.482915 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-4pbj2" podUID="831dc87e-8e14-43d3-a36e-dc7679041ae5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 26 17:01:14 crc kubenswrapper[4856]: I0126 17:01:14.490196 4856 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-wvttb container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Jan 26 17:01:14 crc kubenswrapper[4856]: I0126 17:01:14.490263 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-wvttb" podUID="2d37efbf-d18f-486b-9b43-bc4d181af4ca" containerName="marketplace-operator" probeResult="failure" 
output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Jan 26 17:01:14 crc kubenswrapper[4856]: I0126 17:01:14.490349 4856 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-wvttb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Jan 26 17:01:14 crc kubenswrapper[4856]: I0126 17:01:14.490366 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-wvttb" podUID="2d37efbf-d18f-486b-9b43-bc4d181af4ca" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Jan 26 17:01:14 crc kubenswrapper[4856]: I0126 17:01:14.494381 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k662z" Jan 26 17:01:14 crc kubenswrapper[4856]: I0126 17:01:14.564602 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:14 crc kubenswrapper[4856]: E0126 17:01:14.567989 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:15.067966646 +0000 UTC m=+171.021220627 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:14 crc kubenswrapper[4856]: I0126 17:01:14.755050 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mr7cp" Jan 26 17:01:14 crc kubenswrapper[4856]: I0126 17:01:14.755216 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:14 crc kubenswrapper[4856]: E0126 17:01:14.755946 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:15.255929099 +0000 UTC m=+171.209183090 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:14 crc kubenswrapper[4856]: I0126 17:01:14.759502 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-lndnt" Jan 26 17:01:14 crc kubenswrapper[4856]: I0126 17:01:14.854205 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nn46h" Jan 26 17:01:14 crc kubenswrapper[4856]: I0126 17:01:14.856811 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:14 crc kubenswrapper[4856]: E0126 17:01:14.858302 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:15.358283297 +0000 UTC m=+171.311537368 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:15 crc kubenswrapper[4856]: I0126 17:01:15.031044 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:15 crc kubenswrapper[4856]: E0126 17:01:15.031439 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:15.531410733 +0000 UTC m=+171.484664714 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:15 crc kubenswrapper[4856]: I0126 17:01:15.057244 4856 patch_prober.go:28] interesting pod/router-default-5444994796-h9b2g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 17:01:15 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Jan 26 17:01:15 crc kubenswrapper[4856]: [+]process-running ok Jan 26 17:01:15 crc kubenswrapper[4856]: healthz check failed Jan 26 17:01:15 crc kubenswrapper[4856]: I0126 17:01:15.057298 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9b2g" podUID="85f05bd5-ff83-4d29-9531-ab3499088095" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:01:15 crc kubenswrapper[4856]: I0126 17:01:15.066229 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 17:01:15 crc kubenswrapper[4856]: I0126 17:01:15.194028 4856 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-fpqvc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:01:15 crc kubenswrapper[4856]: I0126 17:01:15.194111 4856 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fpqvc" podUID="5fe6baed-ab97-4d8a-8be2-6f00f9698136" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 17:01:15 crc kubenswrapper[4856]: I0126 17:01:15.194230 4856 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-cb8nk container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:01:15 crc kubenswrapper[4856]: I0126 17:01:15.194299 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" podUID="69008ed1-f3e5-400d-852f-adbcd94199f6" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 17:01:15 crc kubenswrapper[4856]: I0126 17:01:15.194678 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:15 crc kubenswrapper[4856]: E0126 17:01:15.195036 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:15.695020428 +0000 UTC m=+171.648274409 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:15 crc kubenswrapper[4856]: I0126 17:01:15.425592 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:15 crc kubenswrapper[4856]: E0126 17:01:15.426604 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:15.926580766 +0000 UTC m=+171.879834747 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:15 crc kubenswrapper[4856]: I0126 17:01:15.443692 4856 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-5bjl7 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 26 17:01:15 crc kubenswrapper[4856]: I0126 17:01:15.443767 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5bjl7" podUID="2ba3cf6a-a6be-4108-a155-c8bb530aa037" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 26 17:01:15 crc kubenswrapper[4856]: I0126 17:01:15.527558 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:15 crc kubenswrapper[4856]: E0126 17:01:15.528463 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 17:01:16.02844791 +0000 UTC m=+171.981701891 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:15 crc kubenswrapper[4856]: I0126 17:01:15.677839 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:15 crc kubenswrapper[4856]: E0126 17:01:15.678444 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:16.178411173 +0000 UTC m=+172.131665154 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:15 crc kubenswrapper[4856]: I0126 17:01:15.688970 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:15 crc kubenswrapper[4856]: E0126 17:01:15.689985 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:16.189949763 +0000 UTC m=+172.143203744 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:15 crc kubenswrapper[4856]: I0126 17:01:15.795281 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:15 crc kubenswrapper[4856]: E0126 17:01:15.796572 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:16.296516226 +0000 UTC m=+172.249770217 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:15 crc kubenswrapper[4856]: I0126 17:01:15.982000 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:15 crc kubenswrapper[4856]: E0126 17:01:15.982435 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:16.482418728 +0000 UTC m=+172.435672709 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:16 crc kubenswrapper[4856]: I0126 17:01:16.008844 4856 csr.go:261] certificate signing request csr-sr8zt is approved, waiting to be issued Jan 26 17:01:16 crc kubenswrapper[4856]: I0126 17:01:16.008873 4856 csr.go:257] certificate signing request csr-sr8zt is issued Jan 26 17:01:16 crc kubenswrapper[4856]: I0126 17:01:16.390089 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:16 crc kubenswrapper[4856]: E0126 17:01:16.390772 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:16.890738479 +0000 UTC m=+172.843992460 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:16 crc kubenswrapper[4856]: I0126 17:01:16.408594 4856 patch_prober.go:28] interesting pod/router-default-5444994796-h9b2g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 17:01:16 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Jan 26 17:01:16 crc kubenswrapper[4856]: [+]process-running ok Jan 26 17:01:16 crc kubenswrapper[4856]: healthz check failed Jan 26 17:01:16 crc kubenswrapper[4856]: I0126 17:01:16.408669 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9b2g" podUID="85f05bd5-ff83-4d29-9531-ab3499088095" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:01:16 crc kubenswrapper[4856]: I0126 17:01:16.527309 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:16 crc kubenswrapper[4856]: E0126 17:01:16.527918 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 17:01:17.027890633 +0000 UTC m=+172.981144614 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:16 crc kubenswrapper[4856]: I0126 17:01:16.705905 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:16 crc kubenswrapper[4856]: E0126 17:01:16.706823 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:17.206804389 +0000 UTC m=+173.160058360 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:16 crc kubenswrapper[4856]: I0126 17:01:16.750025 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-vfm8t" event={"ID":"c8657575-cd22-4ebc-ae9d-4174366985d3","Type":"ContainerStarted","Data":"aaf3beaf7acc57a58fac4f6add0ba1742c11399c8e5ad40f941c85d9c4ee334e"} Jan 26 17:01:16 crc kubenswrapper[4856]: I0126 17:01:16.776887 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d8c22047-144c-402a-80c5-c206539b6826","Type":"ContainerStarted","Data":"eb80e342e945ab720a056481e6d379636786d87ebb81ecfa7bcd84ffb36388ff"} Jan 26 17:01:16 crc kubenswrapper[4856]: I0126 17:01:16.817395 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:16 crc kubenswrapper[4856]: E0126 17:01:16.817798 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:17.317783702 +0000 UTC m=+173.271037683 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:16 crc kubenswrapper[4856]: I0126 17:01:16.863770 4856 generic.go:334] "Generic (PLEG): container finished" podID="56258114-1bee-4516-ab71-f60d15a9635d" containerID="688d67199a7c309449abcb4f65cea024a314835283d336314d00644e67f44daf" exitCode=0 Jan 26 17:01:16 crc kubenswrapper[4856]: I0126 17:01:16.863848 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"56258114-1bee-4516-ab71-f60d15a9635d","Type":"ContainerDied","Data":"688d67199a7c309449abcb4f65cea024a314835283d336314d00644e67f44daf"} Jan 26 17:01:16 crc kubenswrapper[4856]: I0126 17:01:16.920044 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:16 crc kubenswrapper[4856]: E0126 17:01:16.920364 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:17.420345337 +0000 UTC m=+173.373599318 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:16 crc kubenswrapper[4856]: I0126 17:01:16.997468 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=14.997446829 podStartE2EDuration="14.997446829s" podCreationTimestamp="2026-01-26 17:01:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:01:16.943435307 +0000 UTC m=+172.896689288" watchObservedRunningTime="2026-01-26 17:01:16.997446829 +0000 UTC m=+172.950700810" Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.009693 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-26 16:56:15 +0000 UTC, rotation deadline is 2026-10-11 11:23:10.578419877 +0000 UTC Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.009755 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6186h21m53.568668175s for next certificate rotation Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.025201 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:17 crc kubenswrapper[4856]: E0126 17:01:17.025567 4856 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:17.525555108 +0000 UTC m=+173.478809079 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.152022 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:17 crc kubenswrapper[4856]: E0126 17:01:17.152269 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:17.652237034 +0000 UTC m=+173.605491015 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.152324 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:17 crc kubenswrapper[4856]: E0126 17:01:17.152725 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:17.652715968 +0000 UTC m=+173.605969949 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.244082 4856 patch_prober.go:28] interesting pod/router-default-5444994796-h9b2g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 17:01:17 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Jan 26 17:01:17 crc kubenswrapper[4856]: [+]process-running ok Jan 26 17:01:17 crc kubenswrapper[4856]: healthz check failed Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.244468 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9b2g" podUID="85f05bd5-ff83-4d29-9531-ab3499088095" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.253621 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:17 crc kubenswrapper[4856]: E0126 17:01:17.253771 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 17:01:17.753748148 +0000 UTC m=+173.707002129 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.253854 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:17 crc kubenswrapper[4856]: E0126 17:01:17.254217 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:17.754207871 +0000 UTC m=+173.707461922 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.264181 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cghs" Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.292780 4856 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-6cghs container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 26 17:01:17 crc kubenswrapper[4856]: [+]log ok Jan 26 17:01:17 crc kubenswrapper[4856]: [+]etcd ok Jan 26 17:01:17 crc kubenswrapper[4856]: [+]etcd-readiness ok Jan 26 17:01:17 crc kubenswrapper[4856]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 26 17:01:17 crc kubenswrapper[4856]: [-]informer-sync failed: reason withheld Jan 26 17:01:17 crc kubenswrapper[4856]: [+]poststarthook/generic-apiserver-start-informers ok Jan 26 17:01:17 crc kubenswrapper[4856]: [+]poststarthook/max-in-flight-filter ok Jan 26 17:01:17 crc kubenswrapper[4856]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 26 17:01:17 crc kubenswrapper[4856]: [+]poststarthook/openshift.io-StartUserInformer ok Jan 26 17:01:17 crc kubenswrapper[4856]: [+]poststarthook/openshift.io-StartOAuthInformer ok Jan 26 17:01:17 crc kubenswrapper[4856]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Jan 26 17:01:17 crc kubenswrapper[4856]: [+]shutdown ok Jan 26 17:01:17 crc kubenswrapper[4856]: readyz check failed Jan 26 17:01:17 crc 
kubenswrapper[4856]: I0126 17:01:17.292844 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cghs" podUID="81c2f96b-55e0-483b-b72c-df7e156e9218" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.344348 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5bjl7" Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.354705 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:17 crc kubenswrapper[4856]: E0126 17:01:17.354857 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:17.854834899 +0000 UTC m=+173.808088900 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.355026 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:17 crc kubenswrapper[4856]: E0126 17:01:17.356065 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:17.856043164 +0000 UTC m=+173.809297235 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.455762 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:17 crc kubenswrapper[4856]: E0126 17:01:17.455909 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:17.955886389 +0000 UTC m=+173.909140370 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.456005 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:17 crc kubenswrapper[4856]: E0126 17:01:17.456289 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:17.95627707 +0000 UTC m=+173.909531051 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.498337 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-txmdl"] Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.499764 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-txmdl" Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.509232 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.529285 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-txmdl"] Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.557193 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:17 crc kubenswrapper[4856]: E0126 17:01:17.557305 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:18.057286409 +0000 UTC m=+174.010540380 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.557602 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40a27476-22b1-4083-990e-66e70ccdaf4c-utilities\") pod \"certified-operators-txmdl\" (UID: \"40a27476-22b1-4083-990e-66e70ccdaf4c\") " pod="openshift-marketplace/certified-operators-txmdl" Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.557788 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf49w\" (UniqueName: \"kubernetes.io/projected/40a27476-22b1-4083-990e-66e70ccdaf4c-kube-api-access-tf49w\") pod \"certified-operators-txmdl\" (UID: \"40a27476-22b1-4083-990e-66e70ccdaf4c\") " pod="openshift-marketplace/certified-operators-txmdl" Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.557872 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40a27476-22b1-4083-990e-66e70ccdaf4c-catalog-content\") pod \"certified-operators-txmdl\" (UID: \"40a27476-22b1-4083-990e-66e70ccdaf4c\") " pod="openshift-marketplace/certified-operators-txmdl" Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.557962 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:17 crc kubenswrapper[4856]: E0126 17:01:17.558371 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:18.05835126 +0000 UTC m=+174.011605241 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.659188 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.659418 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40a27476-22b1-4083-990e-66e70ccdaf4c-utilities\") pod \"certified-operators-txmdl\" (UID: \"40a27476-22b1-4083-990e-66e70ccdaf4c\") " pod="openshift-marketplace/certified-operators-txmdl" Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.659472 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tf49w\" (UniqueName: 
\"kubernetes.io/projected/40a27476-22b1-4083-990e-66e70ccdaf4c-kube-api-access-tf49w\") pod \"certified-operators-txmdl\" (UID: \"40a27476-22b1-4083-990e-66e70ccdaf4c\") " pod="openshift-marketplace/certified-operators-txmdl" Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.659505 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40a27476-22b1-4083-990e-66e70ccdaf4c-catalog-content\") pod \"certified-operators-txmdl\" (UID: \"40a27476-22b1-4083-990e-66e70ccdaf4c\") " pod="openshift-marketplace/certified-operators-txmdl" Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.660015 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40a27476-22b1-4083-990e-66e70ccdaf4c-catalog-content\") pod \"certified-operators-txmdl\" (UID: \"40a27476-22b1-4083-990e-66e70ccdaf4c\") " pod="openshift-marketplace/certified-operators-txmdl" Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.660213 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40a27476-22b1-4083-990e-66e70ccdaf4c-utilities\") pod \"certified-operators-txmdl\" (UID: \"40a27476-22b1-4083-990e-66e70ccdaf4c\") " pod="openshift-marketplace/certified-operators-txmdl" Jan 26 17:01:17 crc kubenswrapper[4856]: E0126 17:01:17.660391 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:18.160369809 +0000 UTC m=+174.113623790 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.661200 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n8hp2"] Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.663921 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n8hp2" Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.674959 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.700513 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n8hp2"] Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.708931 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf49w\" (UniqueName: \"kubernetes.io/projected/40a27476-22b1-4083-990e-66e70ccdaf4c-kube-api-access-tf49w\") pod \"certified-operators-txmdl\" (UID: \"40a27476-22b1-4083-990e-66e70ccdaf4c\") " pod="openshift-marketplace/certified-operators-txmdl" Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.760661 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6086d4b-faeb-4a12-8e6a-2a178dfe374c-utilities\") pod \"community-operators-n8hp2\" (UID: \"a6086d4b-faeb-4a12-8e6a-2a178dfe374c\") " pod="openshift-marketplace/community-operators-n8hp2" Jan 26 17:01:17 
crc kubenswrapper[4856]: I0126 17:01:17.760792 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7mvx\" (UniqueName: \"kubernetes.io/projected/a6086d4b-faeb-4a12-8e6a-2a178dfe374c-kube-api-access-x7mvx\") pod \"community-operators-n8hp2\" (UID: \"a6086d4b-faeb-4a12-8e6a-2a178dfe374c\") " pod="openshift-marketplace/community-operators-n8hp2" Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.760833 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.760862 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6086d4b-faeb-4a12-8e6a-2a178dfe374c-catalog-content\") pod \"community-operators-n8hp2\" (UID: \"a6086d4b-faeb-4a12-8e6a-2a178dfe374c\") " pod="openshift-marketplace/community-operators-n8hp2" Jan 26 17:01:17 crc kubenswrapper[4856]: E0126 17:01:17.761232 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:18.261216933 +0000 UTC m=+174.214470914 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.815496 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-txmdl" Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.823579 4856 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.847181 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-62nhd"] Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.848170 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-62nhd" Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.861424 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.861576 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6086d4b-faeb-4a12-8e6a-2a178dfe374c-catalog-content\") pod \"community-operators-n8hp2\" (UID: \"a6086d4b-faeb-4a12-8e6a-2a178dfe374c\") " pod="openshift-marketplace/community-operators-n8hp2" Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.861618 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6086d4b-faeb-4a12-8e6a-2a178dfe374c-utilities\") pod \"community-operators-n8hp2\" (UID: \"a6086d4b-faeb-4a12-8e6a-2a178dfe374c\") " pod="openshift-marketplace/community-operators-n8hp2" Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.861661 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s92lp\" (UniqueName: \"kubernetes.io/projected/7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766-kube-api-access-s92lp\") pod \"certified-operators-62nhd\" (UID: \"7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766\") " pod="openshift-marketplace/certified-operators-62nhd" Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.861679 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766-catalog-content\") pod \"certified-operators-62nhd\" 
(UID: \"7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766\") " pod="openshift-marketplace/certified-operators-62nhd" Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.861709 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766-utilities\") pod \"certified-operators-62nhd\" (UID: \"7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766\") " pod="openshift-marketplace/certified-operators-62nhd" Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.861732 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7mvx\" (UniqueName: \"kubernetes.io/projected/a6086d4b-faeb-4a12-8e6a-2a178dfe374c-kube-api-access-x7mvx\") pod \"community-operators-n8hp2\" (UID: \"a6086d4b-faeb-4a12-8e6a-2a178dfe374c\") " pod="openshift-marketplace/community-operators-n8hp2" Jan 26 17:01:17 crc kubenswrapper[4856]: E0126 17:01:17.862144 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:18.362125188 +0000 UTC m=+174.315379169 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.862165 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6086d4b-faeb-4a12-8e6a-2a178dfe374c-utilities\") pod \"community-operators-n8hp2\" (UID: \"a6086d4b-faeb-4a12-8e6a-2a178dfe374c\") " pod="openshift-marketplace/community-operators-n8hp2" Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.862502 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6086d4b-faeb-4a12-8e6a-2a178dfe374c-catalog-content\") pod \"community-operators-n8hp2\" (UID: \"a6086d4b-faeb-4a12-8e6a-2a178dfe374c\") " pod="openshift-marketplace/community-operators-n8hp2" Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.878981 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d8c22047-144c-402a-80c5-c206539b6826","Type":"ContainerStarted","Data":"1ebaf120a54fd03aee32564585380d095168008ce25b9456bb3082fadde275b6"} Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.881484 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-62nhd"] Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.932743 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=11.93272413 podStartE2EDuration="11.93272413s" podCreationTimestamp="2026-01-26 
17:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:01:17.929892207 +0000 UTC m=+173.883146198" watchObservedRunningTime="2026-01-26 17:01:17.93272413 +0000 UTC m=+173.885978111" Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.939614 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7mvx\" (UniqueName: \"kubernetes.io/projected/a6086d4b-faeb-4a12-8e6a-2a178dfe374c-kube-api-access-x7mvx\") pod \"community-operators-n8hp2\" (UID: \"a6086d4b-faeb-4a12-8e6a-2a178dfe374c\") " pod="openshift-marketplace/community-operators-n8hp2" Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.950629 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-vfm8t" event={"ID":"c8657575-cd22-4ebc-ae9d-4174366985d3","Type":"ContainerStarted","Data":"3af0c7fd8b578466852ece6c83fad0a731244f1ccf3b85c06b6618e8ba6a4cad"} Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.965291 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s92lp\" (UniqueName: \"kubernetes.io/projected/7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766-kube-api-access-s92lp\") pod \"certified-operators-62nhd\" (UID: \"7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766\") " pod="openshift-marketplace/certified-operators-62nhd" Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.965337 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766-catalog-content\") pod \"certified-operators-62nhd\" (UID: \"7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766\") " pod="openshift-marketplace/certified-operators-62nhd" Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.965377 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766-utilities\") pod \"certified-operators-62nhd\" (UID: \"7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766\") " pod="openshift-marketplace/certified-operators-62nhd" Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.965415 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.966101 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766-catalog-content\") pod \"certified-operators-62nhd\" (UID: \"7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766\") " pod="openshift-marketplace/certified-operators-62nhd" Jan 26 17:01:17 crc kubenswrapper[4856]: E0126 17:01:17.966225 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:18.466212038 +0000 UTC m=+174.419466019 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.966552 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766-utilities\") pod \"certified-operators-62nhd\" (UID: \"7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766\") " pod="openshift-marketplace/certified-operators-62nhd" Jan 26 17:01:17 crc kubenswrapper[4856]: I0126 17:01:17.983519 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n8hp2" Jan 26 17:01:18 crc kubenswrapper[4856]: I0126 17:01:18.011294 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s92lp\" (UniqueName: \"kubernetes.io/projected/7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766-kube-api-access-s92lp\") pod \"certified-operators-62nhd\" (UID: \"7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766\") " pod="openshift-marketplace/certified-operators-62nhd" Jan 26 17:01:18 crc kubenswrapper[4856]: I0126 17:01:18.060847 4856 patch_prober.go:28] interesting pod/router-default-5444994796-h9b2g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 17:01:18 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Jan 26 17:01:18 crc kubenswrapper[4856]: [+]process-running ok Jan 26 17:01:18 crc kubenswrapper[4856]: healthz check failed Jan 26 17:01:18 crc kubenswrapper[4856]: I0126 
17:01:18.060920 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9b2g" podUID="85f05bd5-ff83-4d29-9531-ab3499088095" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:01:18 crc kubenswrapper[4856]: I0126 17:01:18.073135 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:18 crc kubenswrapper[4856]: E0126 17:01:18.073674 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:18.573640556 +0000 UTC m=+174.526894537 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:18 crc kubenswrapper[4856]: I0126 17:01:18.080262 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-vfm8t" podStartSLOduration=27.08024018 podStartE2EDuration="27.08024018s" podCreationTimestamp="2026-01-26 17:00:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:01:18.009902766 +0000 UTC m=+173.963156757" watchObservedRunningTime="2026-01-26 17:01:18.08024018 +0000 UTC m=+174.033494161" Jan 26 17:01:18 crc kubenswrapper[4856]: I0126 17:01:18.080470 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qgjjd"] Jan 26 17:01:18 crc kubenswrapper[4856]: I0126 17:01:18.081720 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qgjjd" Jan 26 17:01:18 crc kubenswrapper[4856]: I0126 17:01:18.115201 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qgjjd"] Jan 26 17:01:18 crc kubenswrapper[4856]: I0126 17:01:18.175240 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89cf05de-642b-4574-9f79-45e7a3d4afa3-utilities\") pod \"community-operators-qgjjd\" (UID: \"89cf05de-642b-4574-9f79-45e7a3d4afa3\") " pod="openshift-marketplace/community-operators-qgjjd" Jan 26 17:01:18 crc kubenswrapper[4856]: I0126 17:01:18.175296 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:18 crc kubenswrapper[4856]: I0126 17:01:18.175361 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89cf05de-642b-4574-9f79-45e7a3d4afa3-catalog-content\") pod \"community-operators-qgjjd\" (UID: \"89cf05de-642b-4574-9f79-45e7a3d4afa3\") " pod="openshift-marketplace/community-operators-qgjjd" Jan 26 17:01:18 crc kubenswrapper[4856]: I0126 17:01:18.175419 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtxjz\" (UniqueName: \"kubernetes.io/projected/89cf05de-642b-4574-9f79-45e7a3d4afa3-kube-api-access-gtxjz\") pod \"community-operators-qgjjd\" (UID: \"89cf05de-642b-4574-9f79-45e7a3d4afa3\") " pod="openshift-marketplace/community-operators-qgjjd" Jan 26 17:01:18 crc kubenswrapper[4856]: E0126 
17:01:18.175748 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:18.675726496 +0000 UTC m=+174.628980537 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:18 crc kubenswrapper[4856]: I0126 17:01:18.191851 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-62nhd" Jan 26 17:01:18 crc kubenswrapper[4856]: I0126 17:01:18.280184 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:18 crc kubenswrapper[4856]: I0126 17:01:18.280389 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89cf05de-642b-4574-9f79-45e7a3d4afa3-catalog-content\") pod \"community-operators-qgjjd\" (UID: \"89cf05de-642b-4574-9f79-45e7a3d4afa3\") " pod="openshift-marketplace/community-operators-qgjjd" Jan 26 17:01:18 crc kubenswrapper[4856]: I0126 17:01:18.280438 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtxjz\" (UniqueName: 
\"kubernetes.io/projected/89cf05de-642b-4574-9f79-45e7a3d4afa3-kube-api-access-gtxjz\") pod \"community-operators-qgjjd\" (UID: \"89cf05de-642b-4574-9f79-45e7a3d4afa3\") " pod="openshift-marketplace/community-operators-qgjjd" Jan 26 17:01:18 crc kubenswrapper[4856]: I0126 17:01:18.280472 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89cf05de-642b-4574-9f79-45e7a3d4afa3-utilities\") pod \"community-operators-qgjjd\" (UID: \"89cf05de-642b-4574-9f79-45e7a3d4afa3\") " pod="openshift-marketplace/community-operators-qgjjd" Jan 26 17:01:18 crc kubenswrapper[4856]: I0126 17:01:18.280991 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89cf05de-642b-4574-9f79-45e7a3d4afa3-utilities\") pod \"community-operators-qgjjd\" (UID: \"89cf05de-642b-4574-9f79-45e7a3d4afa3\") " pod="openshift-marketplace/community-operators-qgjjd" Jan 26 17:01:18 crc kubenswrapper[4856]: E0126 17:01:18.281365 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:18.781348741 +0000 UTC m=+174.734602722 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:18 crc kubenswrapper[4856]: I0126 17:01:18.283162 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89cf05de-642b-4574-9f79-45e7a3d4afa3-catalog-content\") pod \"community-operators-qgjjd\" (UID: \"89cf05de-642b-4574-9f79-45e7a3d4afa3\") " pod="openshift-marketplace/community-operators-qgjjd" Jan 26 17:01:18 crc kubenswrapper[4856]: I0126 17:01:18.334699 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtxjz\" (UniqueName: \"kubernetes.io/projected/89cf05de-642b-4574-9f79-45e7a3d4afa3-kube-api-access-gtxjz\") pod \"community-operators-qgjjd\" (UID: \"89cf05de-642b-4574-9f79-45e7a3d4afa3\") " pod="openshift-marketplace/community-operators-qgjjd" Jan 26 17:01:18 crc kubenswrapper[4856]: I0126 17:01:18.381323 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12e50462-28e6-4531-ada4-e652310e6cce-metrics-certs\") pod \"network-metrics-daemon-295wr\" (UID: \"12e50462-28e6-4531-ada4-e652310e6cce\") " pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 17:01:18 crc kubenswrapper[4856]: I0126 17:01:18.381430 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: 
\"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:18 crc kubenswrapper[4856]: E0126 17:01:18.381748 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:18.881735661 +0000 UTC m=+174.834989632 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:18 crc kubenswrapper[4856]: I0126 17:01:18.388148 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12e50462-28e6-4531-ada4-e652310e6cce-metrics-certs\") pod \"network-metrics-daemon-295wr\" (UID: \"12e50462-28e6-4531-ada4-e652310e6cce\") " pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 17:01:18 crc kubenswrapper[4856]: I0126 17:01:18.407126 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qgjjd" Jan 26 17:01:18 crc kubenswrapper[4856]: I0126 17:01:18.485225 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:18 crc kubenswrapper[4856]: E0126 17:01:18.485566 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:18.985547033 +0000 UTC m=+174.938801014 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:18 crc kubenswrapper[4856]: I0126 17:01:18.540635 4856 patch_prober.go:28] interesting pod/apiserver-76f77b778f-6rlxp container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 26 17:01:18 crc kubenswrapper[4856]: [+]log ok Jan 26 17:01:18 crc kubenswrapper[4856]: [+]etcd ok Jan 26 17:01:18 crc kubenswrapper[4856]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 26 17:01:18 crc kubenswrapper[4856]: [+]poststarthook/generic-apiserver-start-informers ok Jan 26 17:01:18 crc kubenswrapper[4856]: [+]poststarthook/max-in-flight-filter ok Jan 
26 17:01:18 crc kubenswrapper[4856]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 26 17:01:18 crc kubenswrapper[4856]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 26 17:01:18 crc kubenswrapper[4856]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 26 17:01:18 crc kubenswrapper[4856]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 26 17:01:18 crc kubenswrapper[4856]: [+]poststarthook/project.openshift.io-projectcache ok Jan 26 17:01:18 crc kubenswrapper[4856]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 26 17:01:18 crc kubenswrapper[4856]: [+]poststarthook/openshift.io-startinformers ok Jan 26 17:01:18 crc kubenswrapper[4856]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 26 17:01:18 crc kubenswrapper[4856]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 26 17:01:18 crc kubenswrapper[4856]: livez check failed Jan 26 17:01:18 crc kubenswrapper[4856]: I0126 17:01:18.540706 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-6rlxp" podUID="a6d331bd-2db3-4319-9f5c-db56d408d9e3" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:01:18 crc kubenswrapper[4856]: I0126 17:01:18.542837 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-295wr" Jan 26 17:01:18 crc kubenswrapper[4856]: I0126 17:01:18.586505 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:18 crc kubenswrapper[4856]: E0126 17:01:18.586966 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:19.086950213 +0000 UTC m=+175.040204194 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:18 crc kubenswrapper[4856]: I0126 17:01:18.690654 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:18 crc kubenswrapper[4856]: E0126 17:01:18.690956 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:19.19093661 +0000 UTC m=+175.144190591 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:18 crc kubenswrapper[4856]: I0126 17:01:18.703639 4856 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-26T17:01:17.82385409Z","Handler":null,"Name":""} Jan 26 17:01:18 crc kubenswrapper[4856]: I0126 17:01:18.755830 4856 patch_prober.go:28] interesting pod/apiserver-76f77b778f-6rlxp container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 26 17:01:18 crc kubenswrapper[4856]: [+]log ok Jan 26 17:01:18 crc kubenswrapper[4856]: [+]etcd ok Jan 26 17:01:18 crc kubenswrapper[4856]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 26 17:01:18 crc kubenswrapper[4856]: [+]poststarthook/generic-apiserver-start-informers ok Jan 26 17:01:18 crc kubenswrapper[4856]: [+]poststarthook/max-in-flight-filter ok Jan 26 17:01:18 crc kubenswrapper[4856]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 26 17:01:18 crc kubenswrapper[4856]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 26 17:01:18 crc kubenswrapper[4856]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 26 17:01:18 crc kubenswrapper[4856]: 
[-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 26 17:01:18 crc kubenswrapper[4856]: [+]poststarthook/project.openshift.io-projectcache ok Jan 26 17:01:18 crc kubenswrapper[4856]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 26 17:01:18 crc kubenswrapper[4856]: [+]poststarthook/openshift.io-startinformers ok Jan 26 17:01:18 crc kubenswrapper[4856]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 26 17:01:18 crc kubenswrapper[4856]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 26 17:01:18 crc kubenswrapper[4856]: livez check failed Jan 26 17:01:18 crc kubenswrapper[4856]: I0126 17:01:18.756148 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-6rlxp" podUID="a6d331bd-2db3-4319-9f5c-db56d408d9e3" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:01:18 crc kubenswrapper[4856]: I0126 17:01:18.792315 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:18 crc kubenswrapper[4856]: E0126 17:01:18.792666 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 17:01:19.292632239 +0000 UTC m=+175.245886220 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wxbdh" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:18 crc kubenswrapper[4856]: I0126 17:01:18.893114 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:18 crc kubenswrapper[4856]: E0126 17:01:18.893762 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 17:01:19.39373925 +0000 UTC m=+175.346993231 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 17:01:18 crc kubenswrapper[4856]: I0126 17:01:18.911819 4856 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 26 17:01:18 crc kubenswrapper[4856]: I0126 17:01:18.911874 4856 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 26 17:01:18 crc kubenswrapper[4856]: I0126 17:01:18.972672 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 17:01:18 crc kubenswrapper[4856]: I0126 17:01:18.985614 4856 generic.go:334] "Generic (PLEG): container finished" podID="d8c22047-144c-402a-80c5-c206539b6826" containerID="1ebaf120a54fd03aee32564585380d095168008ce25b9456bb3082fadde275b6" exitCode=0 Jan 26 17:01:18 crc kubenswrapper[4856]: I0126 17:01:18.985764 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d8c22047-144c-402a-80c5-c206539b6826","Type":"ContainerDied","Data":"1ebaf120a54fd03aee32564585380d095168008ce25b9456bb3082fadde275b6"} Jan 26 17:01:19 crc kubenswrapper[4856]: I0126 17:01:19.008212 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56258114-1bee-4516-ab71-f60d15a9635d-kube-api-access\") pod \"56258114-1bee-4516-ab71-f60d15a9635d\" (UID: \"56258114-1bee-4516-ab71-f60d15a9635d\") " Jan 26 17:01:19 crc kubenswrapper[4856]: I0126 17:01:19.008399 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56258114-1bee-4516-ab71-f60d15a9635d-kubelet-dir\") pod \"56258114-1bee-4516-ab71-f60d15a9635d\" (UID: \"56258114-1bee-4516-ab71-f60d15a9635d\") " Jan 26 17:01:19 crc kubenswrapper[4856]: I0126 17:01:19.008587 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:19 crc kubenswrapper[4856]: I0126 17:01:19.010313 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/56258114-1bee-4516-ab71-f60d15a9635d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "56258114-1bee-4516-ab71-f60d15a9635d" (UID: "56258114-1bee-4516-ab71-f60d15a9635d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:01:19 crc kubenswrapper[4856]: I0126 17:01:19.024596 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 17:01:19 crc kubenswrapper[4856]: I0126 17:01:19.024599 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"56258114-1bee-4516-ab71-f60d15a9635d","Type":"ContainerDied","Data":"02721e310b92647ac526a2662cb5ceecb1c039ddbbb08ec7de0c0bb07775f5b7"} Jan 26 17:01:19 crc kubenswrapper[4856]: I0126 17:01:19.024653 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02721e310b92647ac526a2662cb5ceecb1c039ddbbb08ec7de0c0bb07775f5b7" Jan 26 17:01:19 crc kubenswrapper[4856]: I0126 17:01:19.030909 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56258114-1bee-4516-ab71-f60d15a9635d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "56258114-1bee-4516-ab71-f60d15a9635d" (UID: "56258114-1bee-4516-ab71-f60d15a9635d"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:01:19 crc kubenswrapper[4856]: I0126 17:01:19.057143 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6cghs" Jan 26 17:01:19 crc kubenswrapper[4856]: I0126 17:01:19.063437 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-txmdl"] Jan 26 17:01:19 crc kubenswrapper[4856]: I0126 17:01:19.136398 4856 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56258114-1bee-4516-ab71-f60d15a9635d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 17:01:19 crc kubenswrapper[4856]: I0126 17:01:19.136423 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56258114-1bee-4516-ab71-f60d15a9635d-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 17:01:19 crc kubenswrapper[4856]: I0126 17:01:19.138629 4856 patch_prober.go:28] interesting pod/router-default-5444994796-h9b2g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 17:01:19 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Jan 26 17:01:19 crc kubenswrapper[4856]: [+]process-running ok Jan 26 17:01:19 crc kubenswrapper[4856]: healthz check failed Jan 26 17:01:19 crc kubenswrapper[4856]: I0126 17:01:19.138687 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9b2g" podUID="85f05bd5-ff83-4d29-9531-ab3499088095" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:01:19 crc kubenswrapper[4856]: I0126 17:01:19.151906 4856 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 17:01:19 crc kubenswrapper[4856]: I0126 17:01:19.151975 4856 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:19 crc kubenswrapper[4856]: I0126 17:01:19.190998 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n8hp2"] Jan 26 17:01:19 crc kubenswrapper[4856]: I0126 17:01:19.337690 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-62nhd"] Jan 26 17:01:19 crc kubenswrapper[4856]: I0126 17:01:19.340334 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wxbdh\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") " pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:19 crc kubenswrapper[4856]: I0126 17:01:19.422093 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:19 crc kubenswrapper[4856]: I0126 17:01:19.456147 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 17:01:19 crc kubenswrapper[4856]: I0126 17:01:19.495735 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4kwt4"] Jan 26 17:01:19 crc kubenswrapper[4856]: E0126 17:01:19.496113 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56258114-1bee-4516-ab71-f60d15a9635d" containerName="pruner" Jan 26 17:01:19 crc kubenswrapper[4856]: I0126 17:01:19.496137 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="56258114-1bee-4516-ab71-f60d15a9635d" containerName="pruner" Jan 26 17:01:19 crc kubenswrapper[4856]: I0126 17:01:19.496274 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="56258114-1bee-4516-ab71-f60d15a9635d" containerName="pruner" Jan 26 17:01:19 crc kubenswrapper[4856]: I0126 17:01:19.497356 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4kwt4" Jan 26 17:01:19 crc kubenswrapper[4856]: I0126 17:01:19.515709 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 17:01:19 crc kubenswrapper[4856]: I0126 17:01:19.521245 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qgjjd"] Jan 26 17:01:19 crc kubenswrapper[4856]: I0126 17:01:19.534304 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4kwt4"] Jan 26 17:01:19 crc kubenswrapper[4856]: I0126 17:01:19.625416 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 17:01:19 crc kubenswrapper[4856]: I0126 17:01:19.630511 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lndnt"] Jan 26 17:01:19 crc kubenswrapper[4856]: I0126 17:01:19.630820 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-lndnt" podUID="1afc0f4c-e02d-4a70-aaba-e761e8c04eee" containerName="controller-manager" containerID="cri-o://e9e54e2a4a2266ca4148b11cb38df08f87c2f2ccd87dc3343d147862786c16e2" gracePeriod=30 Jan 26 17:01:19 crc kubenswrapper[4856]: I0126 17:01:19.650975 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fpqvc"] Jan 26 17:01:19 crc kubenswrapper[4856]: I0126 17:01:19.651243 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fpqvc" podUID="5fe6baed-ab97-4d8a-8be2-6f00f9698136" containerName="route-controller-manager" containerID="cri-o://56133c3e036efeb9590dc043f9b9af766fce603e2c50cfdca46be37466b88f62" gracePeriod=30 Jan 26 17:01:19 crc kubenswrapper[4856]: I0126 17:01:19.659480 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6944fc9-b8d7-4013-8702-b5765c410a0b-catalog-content\") pod \"redhat-marketplace-4kwt4\" (UID: \"d6944fc9-b8d7-4013-8702-b5765c410a0b\") " pod="openshift-marketplace/redhat-marketplace-4kwt4" Jan 26 17:01:19 crc kubenswrapper[4856]: I0126 17:01:19.659562 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qzgz\" (UniqueName: \"kubernetes.io/projected/d6944fc9-b8d7-4013-8702-b5765c410a0b-kube-api-access-2qzgz\") pod \"redhat-marketplace-4kwt4\" (UID: 
\"d6944fc9-b8d7-4013-8702-b5765c410a0b\") " pod="openshift-marketplace/redhat-marketplace-4kwt4" Jan 26 17:01:19 crc kubenswrapper[4856]: I0126 17:01:19.659592 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6944fc9-b8d7-4013-8702-b5765c410a0b-utilities\") pod \"redhat-marketplace-4kwt4\" (UID: \"d6944fc9-b8d7-4013-8702-b5765c410a0b\") " pod="openshift-marketplace/redhat-marketplace-4kwt4" Jan 26 17:01:19 crc kubenswrapper[4856]: I0126 17:01:19.674391 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fpqvc" Jan 26 17:01:19 crc kubenswrapper[4856]: I0126 17:01:19.764555 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6944fc9-b8d7-4013-8702-b5765c410a0b-catalog-content\") pod \"redhat-marketplace-4kwt4\" (UID: \"d6944fc9-b8d7-4013-8702-b5765c410a0b\") " pod="openshift-marketplace/redhat-marketplace-4kwt4" Jan 26 17:01:19 crc kubenswrapper[4856]: I0126 17:01:19.764643 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qzgz\" (UniqueName: \"kubernetes.io/projected/d6944fc9-b8d7-4013-8702-b5765c410a0b-kube-api-access-2qzgz\") pod \"redhat-marketplace-4kwt4\" (UID: \"d6944fc9-b8d7-4013-8702-b5765c410a0b\") " pod="openshift-marketplace/redhat-marketplace-4kwt4" Jan 26 17:01:19 crc kubenswrapper[4856]: I0126 17:01:19.764687 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6944fc9-b8d7-4013-8702-b5765c410a0b-utilities\") pod \"redhat-marketplace-4kwt4\" (UID: \"d6944fc9-b8d7-4013-8702-b5765c410a0b\") " pod="openshift-marketplace/redhat-marketplace-4kwt4" Jan 26 17:01:19 crc kubenswrapper[4856]: I0126 17:01:19.765329 4856 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6944fc9-b8d7-4013-8702-b5765c410a0b-utilities\") pod \"redhat-marketplace-4kwt4\" (UID: \"d6944fc9-b8d7-4013-8702-b5765c410a0b\") " pod="openshift-marketplace/redhat-marketplace-4kwt4" Jan 26 17:01:19 crc kubenswrapper[4856]: I0126 17:01:19.765618 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6944fc9-b8d7-4013-8702-b5765c410a0b-catalog-content\") pod \"redhat-marketplace-4kwt4\" (UID: \"d6944fc9-b8d7-4013-8702-b5765c410a0b\") " pod="openshift-marketplace/redhat-marketplace-4kwt4" Jan 26 17:01:19 crc kubenswrapper[4856]: I0126 17:01:19.791069 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-295wr"] Jan 26 17:01:19 crc kubenswrapper[4856]: I0126 17:01:19.824125 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qzgz\" (UniqueName: \"kubernetes.io/projected/d6944fc9-b8d7-4013-8702-b5765c410a0b-kube-api-access-2qzgz\") pod \"redhat-marketplace-4kwt4\" (UID: \"d6944fc9-b8d7-4013-8702-b5765c410a0b\") " pod="openshift-marketplace/redhat-marketplace-4kwt4" Jan 26 17:01:19 crc kubenswrapper[4856]: I0126 17:01:19.850873 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-g8bgt"] Jan 26 17:01:19 crc kubenswrapper[4856]: I0126 17:01:19.851932 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g8bgt" Jan 26 17:01:19 crc kubenswrapper[4856]: I0126 17:01:19.892913 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4kwt4" Jan 26 17:01:19 crc kubenswrapper[4856]: I0126 17:01:19.942392 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g8bgt"] Jan 26 17:01:19 crc kubenswrapper[4856]: I0126 17:01:19.967202 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcbvc\" (UniqueName: \"kubernetes.io/projected/0d7eb7b8-63ae-493a-850b-0b9f3b42e927-kube-api-access-dcbvc\") pod \"redhat-marketplace-g8bgt\" (UID: \"0d7eb7b8-63ae-493a-850b-0b9f3b42e927\") " pod="openshift-marketplace/redhat-marketplace-g8bgt" Jan 26 17:01:19 crc kubenswrapper[4856]: I0126 17:01:19.967267 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d7eb7b8-63ae-493a-850b-0b9f3b42e927-utilities\") pod \"redhat-marketplace-g8bgt\" (UID: \"0d7eb7b8-63ae-493a-850b-0b9f3b42e927\") " pod="openshift-marketplace/redhat-marketplace-g8bgt" Jan 26 17:01:19 crc kubenswrapper[4856]: I0126 17:01:19.967324 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d7eb7b8-63ae-493a-850b-0b9f3b42e927-catalog-content\") pod \"redhat-marketplace-g8bgt\" (UID: \"0d7eb7b8-63ae-493a-850b-0b9f3b42e927\") " pod="openshift-marketplace/redhat-marketplace-g8bgt" Jan 26 17:01:20 crc kubenswrapper[4856]: I0126 17:01:20.023226 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wxbdh"] Jan 26 17:01:20 crc kubenswrapper[4856]: I0126 17:01:20.041663 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n8hp2" event={"ID":"a6086d4b-faeb-4a12-8e6a-2a178dfe374c","Type":"ContainerStarted","Data":"61bc611402534dad5a09a8edd4e25038026dc1769890ea9d2407a69eb9c888af"} 
Jan 26 17:01:20 crc kubenswrapper[4856]: I0126 17:01:20.045328 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-295wr" event={"ID":"12e50462-28e6-4531-ada4-e652310e6cce","Type":"ContainerStarted","Data":"55ca52361a33fc96ae9820708b118cc821392b787ebe52650ff20c003ce403f1"} Jan 26 17:01:20 crc kubenswrapper[4856]: I0126 17:01:20.054715 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-txmdl" event={"ID":"40a27476-22b1-4083-990e-66e70ccdaf4c","Type":"ContainerStarted","Data":"894929ba59d66c867404dc7094d1e4c1b977bab79b099140b34c889e7b66ae16"} Jan 26 17:01:20 crc kubenswrapper[4856]: I0126 17:01:20.059653 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgjjd" event={"ID":"89cf05de-642b-4574-9f79-45e7a3d4afa3","Type":"ContainerStarted","Data":"e31b957fac8983059a89e5a7867c6294be7613d1c35b810e6c7face168eea509"} Jan 26 17:01:20 crc kubenswrapper[4856]: I0126 17:01:20.063852 4856 patch_prober.go:28] interesting pod/router-default-5444994796-h9b2g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 17:01:20 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Jan 26 17:01:20 crc kubenswrapper[4856]: [+]process-running ok Jan 26 17:01:20 crc kubenswrapper[4856]: healthz check failed Jan 26 17:01:20 crc kubenswrapper[4856]: I0126 17:01:20.063890 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9b2g" podUID="85f05bd5-ff83-4d29-9531-ab3499088095" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:01:20 crc kubenswrapper[4856]: I0126 17:01:20.064131 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62nhd" 
event={"ID":"7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766","Type":"ContainerStarted","Data":"54783f51c7d33737624b9dffb5983a3ed107d31d30f3fd03ab73e5627dfd4bfd"} Jan 26 17:01:20 crc kubenswrapper[4856]: I0126 17:01:20.068924 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d7eb7b8-63ae-493a-850b-0b9f3b42e927-catalog-content\") pod \"redhat-marketplace-g8bgt\" (UID: \"0d7eb7b8-63ae-493a-850b-0b9f3b42e927\") " pod="openshift-marketplace/redhat-marketplace-g8bgt" Jan 26 17:01:20 crc kubenswrapper[4856]: I0126 17:01:20.069032 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcbvc\" (UniqueName: \"kubernetes.io/projected/0d7eb7b8-63ae-493a-850b-0b9f3b42e927-kube-api-access-dcbvc\") pod \"redhat-marketplace-g8bgt\" (UID: \"0d7eb7b8-63ae-493a-850b-0b9f3b42e927\") " pod="openshift-marketplace/redhat-marketplace-g8bgt" Jan 26 17:01:20 crc kubenswrapper[4856]: I0126 17:01:20.069059 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d7eb7b8-63ae-493a-850b-0b9f3b42e927-utilities\") pod \"redhat-marketplace-g8bgt\" (UID: \"0d7eb7b8-63ae-493a-850b-0b9f3b42e927\") " pod="openshift-marketplace/redhat-marketplace-g8bgt" Jan 26 17:01:20 crc kubenswrapper[4856]: I0126 17:01:20.069553 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d7eb7b8-63ae-493a-850b-0b9f3b42e927-utilities\") pod \"redhat-marketplace-g8bgt\" (UID: \"0d7eb7b8-63ae-493a-850b-0b9f3b42e927\") " pod="openshift-marketplace/redhat-marketplace-g8bgt" Jan 26 17:01:20 crc kubenswrapper[4856]: I0126 17:01:20.069786 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d7eb7b8-63ae-493a-850b-0b9f3b42e927-catalog-content\") pod \"redhat-marketplace-g8bgt\" 
(UID: \"0d7eb7b8-63ae-493a-850b-0b9f3b42e927\") " pod="openshift-marketplace/redhat-marketplace-g8bgt" Jan 26 17:01:20 crc kubenswrapper[4856]: I0126 17:01:20.117552 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcbvc\" (UniqueName: \"kubernetes.io/projected/0d7eb7b8-63ae-493a-850b-0b9f3b42e927-kube-api-access-dcbvc\") pod \"redhat-marketplace-g8bgt\" (UID: \"0d7eb7b8-63ae-493a-850b-0b9f3b42e927\") " pod="openshift-marketplace/redhat-marketplace-g8bgt" Jan 26 17:01:20 crc kubenswrapper[4856]: I0126 17:01:20.220289 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g8bgt" Jan 26 17:01:20 crc kubenswrapper[4856]: I0126 17:01:20.294087 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4kwt4"] Jan 26 17:01:20 crc kubenswrapper[4856]: W0126 17:01:20.369916 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6944fc9_b8d7_4013_8702_b5765c410a0b.slice/crio-c6a85642ee783cdf59dd26ba744cc42773e760d42354900c16ebdd5e8e9ec111 WatchSource:0}: Error finding container c6a85642ee783cdf59dd26ba744cc42773e760d42354900c16ebdd5e8e9ec111: Status 404 returned error can't find the container with id c6a85642ee783cdf59dd26ba744cc42773e760d42354900c16ebdd5e8e9ec111 Jan 26 17:01:20 crc kubenswrapper[4856]: I0126 17:01:20.375993 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 17:01:20 crc kubenswrapper[4856]: I0126 17:01:20.474180 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d8c22047-144c-402a-80c5-c206539b6826-kube-api-access\") pod \"d8c22047-144c-402a-80c5-c206539b6826\" (UID: \"d8c22047-144c-402a-80c5-c206539b6826\") " Jan 26 17:01:20 crc kubenswrapper[4856]: I0126 17:01:20.474232 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d8c22047-144c-402a-80c5-c206539b6826-kubelet-dir\") pod \"d8c22047-144c-402a-80c5-c206539b6826\" (UID: \"d8c22047-144c-402a-80c5-c206539b6826\") " Jan 26 17:01:20 crc kubenswrapper[4856]: I0126 17:01:20.474734 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8c22047-144c-402a-80c5-c206539b6826-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d8c22047-144c-402a-80c5-c206539b6826" (UID: "d8c22047-144c-402a-80c5-c206539b6826"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:01:20 crc kubenswrapper[4856]: I0126 17:01:20.481782 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8c22047-144c-402a-80c5-c206539b6826-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d8c22047-144c-402a-80c5-c206539b6826" (UID: "d8c22047-144c-402a-80c5-c206539b6826"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 17:01:20 crc kubenswrapper[4856]: I0126 17:01:20.576293 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d8c22047-144c-402a-80c5-c206539b6826-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 26 17:01:20 crc kubenswrapper[4856]: I0126 17:01:20.576329 4856 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d8c22047-144c-402a-80c5-c206539b6826-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 26 17:01:20 crc kubenswrapper[4856]: I0126 17:01:20.579981 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g8bgt"]
Jan 26 17:01:20 crc kubenswrapper[4856]: I0126 17:01:20.847003 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qknj9"]
Jan 26 17:01:20 crc kubenswrapper[4856]: E0126 17:01:20.847607 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8c22047-144c-402a-80c5-c206539b6826" containerName="pruner"
Jan 26 17:01:20 crc kubenswrapper[4856]: I0126 17:01:20.847625 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8c22047-144c-402a-80c5-c206539b6826" containerName="pruner"
Jan 26 17:01:20 crc kubenswrapper[4856]: I0126 17:01:20.847797 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8c22047-144c-402a-80c5-c206539b6826" containerName="pruner"
Jan 26 17:01:20 crc kubenswrapper[4856]: I0126 17:01:20.848515 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qknj9"
Jan 26 17:01:20 crc kubenswrapper[4856]: I0126 17:01:20.853626 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 26 17:01:20 crc kubenswrapper[4856]: I0126 17:01:20.864201 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qknj9"]
Jan 26 17:01:20 crc kubenswrapper[4856]: I0126 17:01:20.915420 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fpqvc"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.006501 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3fa94fe-e4ad-4171-b853-89878dc61569-utilities\") pod \"redhat-operators-qknj9\" (UID: \"a3fa94fe-e4ad-4171-b853-89878dc61569\") " pod="openshift-marketplace/redhat-operators-qknj9"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.006691 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3fa94fe-e4ad-4171-b853-89878dc61569-catalog-content\") pod \"redhat-operators-qknj9\" (UID: \"a3fa94fe-e4ad-4171-b853-89878dc61569\") " pod="openshift-marketplace/redhat-operators-qknj9"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.006731 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvpzb\" (UniqueName: \"kubernetes.io/projected/a3fa94fe-e4ad-4171-b853-89878dc61569-kube-api-access-wvpzb\") pod \"redhat-operators-qknj9\" (UID: \"a3fa94fe-e4ad-4171-b853-89878dc61569\") " pod="openshift-marketplace/redhat-operators-qknj9"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.064789 4856 patch_prober.go:28] interesting pod/router-default-5444994796-h9b2g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 17:01:21 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld
Jan 26 17:01:21 crc kubenswrapper[4856]: [+]process-running ok
Jan 26 17:01:21 crc kubenswrapper[4856]: healthz check failed
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.065199 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9b2g" podUID="85f05bd5-ff83-4d29-9531-ab3499088095" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.080137 4856 generic.go:334] "Generic (PLEG): container finished" podID="d6944fc9-b8d7-4013-8702-b5765c410a0b" containerID="d4ffeb43e14865bfef28f884de6e5301087c2d9158d7a77b0c10a8dfec7c7ce2" exitCode=0
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.080214 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kwt4" event={"ID":"d6944fc9-b8d7-4013-8702-b5765c410a0b","Type":"ContainerDied","Data":"d4ffeb43e14865bfef28f884de6e5301087c2d9158d7a77b0c10a8dfec7c7ce2"}
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.080238 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kwt4" event={"ID":"d6944fc9-b8d7-4013-8702-b5765c410a0b","Type":"ContainerStarted","Data":"c6a85642ee783cdf59dd26ba744cc42773e760d42354900c16ebdd5e8e9ec111"}
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.082000 4856 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.083151 4856 generic.go:334] "Generic (PLEG): container finished" podID="40a27476-22b1-4083-990e-66e70ccdaf4c" containerID="3ec09320bb48de5d8b6709469f0f84953408cf650f51d872373c21616d43f0de" exitCode=0
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.083236 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-txmdl" event={"ID":"40a27476-22b1-4083-990e-66e70ccdaf4c","Type":"ContainerDied","Data":"3ec09320bb48de5d8b6709469f0f84953408cf650f51d872373c21616d43f0de"}
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.087194 4856 generic.go:334] "Generic (PLEG): container finished" podID="89cf05de-642b-4574-9f79-45e7a3d4afa3" containerID="de3e1fd7d5b6adab2150705e57df43577251e5278edb52956bb11f5539b1538a" exitCode=0
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.087912 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgjjd" event={"ID":"89cf05de-642b-4574-9f79-45e7a3d4afa3","Type":"ContainerDied","Data":"de3e1fd7d5b6adab2150705e57df43577251e5278edb52956bb11f5539b1538a"}
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.095240 4856 generic.go:334] "Generic (PLEG): container finished" podID="0d7eb7b8-63ae-493a-850b-0b9f3b42e927" containerID="a9fe692a78995f7dad7ea556edacc772eb429ab92938195725add9a17bbe9e7c" exitCode=0
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.095422 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g8bgt" event={"ID":"0d7eb7b8-63ae-493a-850b-0b9f3b42e927","Type":"ContainerDied","Data":"a9fe692a78995f7dad7ea556edacc772eb429ab92938195725add9a17bbe9e7c"}
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.096558 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g8bgt" event={"ID":"0d7eb7b8-63ae-493a-850b-0b9f3b42e927","Type":"ContainerStarted","Data":"eca9c93c5c35ce3c6c300c833124d2e0c4c40f4feaf4a45bd12b4eecdb2f116c"}
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.098845 4856 generic.go:334] "Generic (PLEG): container finished" podID="5fe6baed-ab97-4d8a-8be2-6f00f9698136" containerID="56133c3e036efeb9590dc043f9b9af766fce603e2c50cfdca46be37466b88f62" exitCode=0
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.098918 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fpqvc" event={"ID":"5fe6baed-ab97-4d8a-8be2-6f00f9698136","Type":"ContainerDied","Data":"56133c3e036efeb9590dc043f9b9af766fce603e2c50cfdca46be37466b88f62"}
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.098960 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fpqvc" event={"ID":"5fe6baed-ab97-4d8a-8be2-6f00f9698136","Type":"ContainerDied","Data":"e3a4f0c156036789efac8b4cdbd3ace5dcdaf8c187d261687c3b9c87a15d74df"}
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.099000 4856 scope.go:117] "RemoveContainer" containerID="56133c3e036efeb9590dc043f9b9af766fce603e2c50cfdca46be37466b88f62"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.099121 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fpqvc"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.107173 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fe6baed-ab97-4d8a-8be2-6f00f9698136-config\") pod \"5fe6baed-ab97-4d8a-8be2-6f00f9698136\" (UID: \"5fe6baed-ab97-4d8a-8be2-6f00f9698136\") "
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.107214 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5fe6baed-ab97-4d8a-8be2-6f00f9698136-client-ca\") pod \"5fe6baed-ab97-4d8a-8be2-6f00f9698136\" (UID: \"5fe6baed-ab97-4d8a-8be2-6f00f9698136\") "
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.107240 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpfwk\" (UniqueName: \"kubernetes.io/projected/5fe6baed-ab97-4d8a-8be2-6f00f9698136-kube-api-access-hpfwk\") pod \"5fe6baed-ab97-4d8a-8be2-6f00f9698136\" (UID: \"5fe6baed-ab97-4d8a-8be2-6f00f9698136\") "
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.107308 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fe6baed-ab97-4d8a-8be2-6f00f9698136-serving-cert\") pod \"5fe6baed-ab97-4d8a-8be2-6f00f9698136\" (UID: \"5fe6baed-ab97-4d8a-8be2-6f00f9698136\") "
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.107541 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3fa94fe-e4ad-4171-b853-89878dc61569-catalog-content\") pod \"redhat-operators-qknj9\" (UID: \"a3fa94fe-e4ad-4171-b853-89878dc61569\") " pod="openshift-marketplace/redhat-operators-qknj9"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.107568 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvpzb\" (UniqueName: \"kubernetes.io/projected/a3fa94fe-e4ad-4171-b853-89878dc61569-kube-api-access-wvpzb\") pod \"redhat-operators-qknj9\" (UID: \"a3fa94fe-e4ad-4171-b853-89878dc61569\") " pod="openshift-marketplace/redhat-operators-qknj9"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.107587 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3fa94fe-e4ad-4171-b853-89878dc61569-utilities\") pod \"redhat-operators-qknj9\" (UID: \"a3fa94fe-e4ad-4171-b853-89878dc61569\") " pod="openshift-marketplace/redhat-operators-qknj9"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.108038 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3fa94fe-e4ad-4171-b853-89878dc61569-utilities\") pod \"redhat-operators-qknj9\" (UID: \"a3fa94fe-e4ad-4171-b853-89878dc61569\") " pod="openshift-marketplace/redhat-operators-qknj9"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.110265 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fe6baed-ab97-4d8a-8be2-6f00f9698136-config" (OuterVolumeSpecName: "config") pod "5fe6baed-ab97-4d8a-8be2-6f00f9698136" (UID: "5fe6baed-ab97-4d8a-8be2-6f00f9698136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.110711 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fe6baed-ab97-4d8a-8be2-6f00f9698136-client-ca" (OuterVolumeSpecName: "client-ca") pod "5fe6baed-ab97-4d8a-8be2-6f00f9698136" (UID: "5fe6baed-ab97-4d8a-8be2-6f00f9698136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.111380 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3fa94fe-e4ad-4171-b853-89878dc61569-catalog-content\") pod \"redhat-operators-qknj9\" (UID: \"a3fa94fe-e4ad-4171-b853-89878dc61569\") " pod="openshift-marketplace/redhat-operators-qknj9"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.113515 4856 generic.go:334] "Generic (PLEG): container finished" podID="a6086d4b-faeb-4a12-8e6a-2a178dfe374c" containerID="5638f22e046bc8f28ee2834fa7820e942af58e17d4efe952168ca98e63b3fa12" exitCode=0
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.113668 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n8hp2" event={"ID":"a6086d4b-faeb-4a12-8e6a-2a178dfe374c","Type":"ContainerDied","Data":"5638f22e046bc8f28ee2834fa7820e942af58e17d4efe952168ca98e63b3fa12"}
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.117039 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-295wr" event={"ID":"12e50462-28e6-4531-ada4-e652310e6cce","Type":"ContainerStarted","Data":"5868fcdab59505042d1235014aa1685fbf9cc65d23b45066a7b3729afc514f50"}
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.117088 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-295wr" event={"ID":"12e50462-28e6-4531-ada4-e652310e6cce","Type":"ContainerStarted","Data":"7de6d5aef139d7d3e970838bf0c7a91c17246160a4fbf31d318d10f88ebf2901"}
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.118690 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe6baed-ab97-4d8a-8be2-6f00f9698136-kube-api-access-hpfwk" (OuterVolumeSpecName: "kube-api-access-hpfwk") pod "5fe6baed-ab97-4d8a-8be2-6f00f9698136" (UID: "5fe6baed-ab97-4d8a-8be2-6f00f9698136"). InnerVolumeSpecName "kube-api-access-hpfwk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.119569 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe6baed-ab97-4d8a-8be2-6f00f9698136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5fe6baed-ab97-4d8a-8be2-6f00f9698136" (UID: "5fe6baed-ab97-4d8a-8be2-6f00f9698136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.122135 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" event={"ID":"cfa40861-cc08-4145-a185-6a3fb07eaabe","Type":"ContainerStarted","Data":"fc8e05e1e87fe66232302aff71c23d6b6c36b366751f113f41815a46bc948eb9"}
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.122171 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" event={"ID":"cfa40861-cc08-4145-a185-6a3fb07eaabe","Type":"ContainerStarted","Data":"ae7df2de181ac684cadd8c52c3b8878c72703f16549d24e92a2fc45b186ce717"}
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.122267 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.129994 4856 generic.go:334] "Generic (PLEG): container finished" podID="1afc0f4c-e02d-4a70-aaba-e761e8c04eee" containerID="e9e54e2a4a2266ca4148b11cb38df08f87c2f2ccd87dc3343d147862786c16e2" exitCode=0
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.130121 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lndnt" event={"ID":"1afc0f4c-e02d-4a70-aaba-e761e8c04eee","Type":"ContainerDied","Data":"e9e54e2a4a2266ca4148b11cb38df08f87c2f2ccd87dc3343d147862786c16e2"}
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.133216 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d8c22047-144c-402a-80c5-c206539b6826","Type":"ContainerDied","Data":"eb80e342e945ab720a056481e6d379636786d87ebb81ecfa7bcd84ffb36388ff"}
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.133245 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb80e342e945ab720a056481e6d379636786d87ebb81ecfa7bcd84ffb36388ff"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.133308 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.143611 4856 generic.go:334] "Generic (PLEG): container finished" podID="7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766" containerID="f96de8f882682ea8e5a30970c1ce8d34c4b60cb434e13968e3bd6879b62b071b" exitCode=0
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.143669 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62nhd" event={"ID":"7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766","Type":"ContainerDied","Data":"f96de8f882682ea8e5a30970c1ce8d34c4b60cb434e13968e3bd6879b62b071b"}
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.147461 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvpzb\" (UniqueName: \"kubernetes.io/projected/a3fa94fe-e4ad-4171-b853-89878dc61569-kube-api-access-wvpzb\") pod \"redhat-operators-qknj9\" (UID: \"a3fa94fe-e4ad-4171-b853-89878dc61569\") " pod="openshift-marketplace/redhat-operators-qknj9"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.158962 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lndnt"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.160304 4856 scope.go:117] "RemoveContainer" containerID="56133c3e036efeb9590dc043f9b9af766fce603e2c50cfdca46be37466b88f62"
Jan 26 17:01:21 crc kubenswrapper[4856]: E0126 17:01:21.160654 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56133c3e036efeb9590dc043f9b9af766fce603e2c50cfdca46be37466b88f62\": container with ID starting with 56133c3e036efeb9590dc043f9b9af766fce603e2c50cfdca46be37466b88f62 not found: ID does not exist" containerID="56133c3e036efeb9590dc043f9b9af766fce603e2c50cfdca46be37466b88f62"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.160694 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56133c3e036efeb9590dc043f9b9af766fce603e2c50cfdca46be37466b88f62"} err="failed to get container status \"56133c3e036efeb9590dc043f9b9af766fce603e2c50cfdca46be37466b88f62\": rpc error: code = NotFound desc = could not find container \"56133c3e036efeb9590dc043f9b9af766fce603e2c50cfdca46be37466b88f62\": container with ID starting with 56133c3e036efeb9590dc043f9b9af766fce603e2c50cfdca46be37466b88f62 not found: ID does not exist"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.209038 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fe6baed-ab97-4d8a-8be2-6f00f9698136-config\") on node \"crc\" DevicePath \"\""
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.209076 4856 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5fe6baed-ab97-4d8a-8be2-6f00f9698136-client-ca\") on node \"crc\" DevicePath \"\""
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.209088 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpfwk\" (UniqueName: \"kubernetes.io/projected/5fe6baed-ab97-4d8a-8be2-6f00f9698136-kube-api-access-hpfwk\") on node \"crc\" DevicePath \"\""
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.209100 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fe6baed-ab97-4d8a-8be2-6f00f9698136-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.228805 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qknj9"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.261128 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5457c9f8cf-lcqcl"]
Jan 26 17:01:21 crc kubenswrapper[4856]: E0126 17:01:21.261364 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fe6baed-ab97-4d8a-8be2-6f00f9698136" containerName="route-controller-manager"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.261375 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fe6baed-ab97-4d8a-8be2-6f00f9698136" containerName="route-controller-manager"
Jan 26 17:01:21 crc kubenswrapper[4856]: E0126 17:01:21.261391 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1afc0f4c-e02d-4a70-aaba-e761e8c04eee" containerName="controller-manager"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.261397 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="1afc0f4c-e02d-4a70-aaba-e761e8c04eee" containerName="controller-manager"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.261501 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fe6baed-ab97-4d8a-8be2-6f00f9698136" containerName="route-controller-manager"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.261514 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="1afc0f4c-e02d-4a70-aaba-e761e8c04eee" containerName="controller-manager"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.261854 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cb449784d-bqprm"]
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.271877 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cb449784d-bqprm"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.272329 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5457c9f8cf-lcqcl"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.304377 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mqxwf"]
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.310992 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mqxwf"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.313626 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1afc0f4c-e02d-4a70-aaba-e761e8c04eee-client-ca\") pod \"1afc0f4c-e02d-4a70-aaba-e761e8c04eee\" (UID: \"1afc0f4c-e02d-4a70-aaba-e761e8c04eee\") "
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.314206 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mr8gn\" (UniqueName: \"kubernetes.io/projected/1afc0f4c-e02d-4a70-aaba-e761e8c04eee-kube-api-access-mr8gn\") pod \"1afc0f4c-e02d-4a70-aaba-e761e8c04eee\" (UID: \"1afc0f4c-e02d-4a70-aaba-e761e8c04eee\") "
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.314250 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1afc0f4c-e02d-4a70-aaba-e761e8c04eee-proxy-ca-bundles\") pod \"1afc0f4c-e02d-4a70-aaba-e761e8c04eee\" (UID: \"1afc0f4c-e02d-4a70-aaba-e761e8c04eee\") "
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.315155 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1afc0f4c-e02d-4a70-aaba-e761e8c04eee-serving-cert\") pod \"1afc0f4c-e02d-4a70-aaba-e761e8c04eee\" (UID: \"1afc0f4c-e02d-4a70-aaba-e761e8c04eee\") "
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.315654 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1afc0f4c-e02d-4a70-aaba-e761e8c04eee-config\") pod \"1afc0f4c-e02d-4a70-aaba-e761e8c04eee\" (UID: \"1afc0f4c-e02d-4a70-aaba-e761e8c04eee\") "
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.316270 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-295wr" podStartSLOduration=147.316251539 podStartE2EDuration="2m27.316251539s" podCreationTimestamp="2026-01-26 16:58:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:01:21.273159958 +0000 UTC m=+177.226413939" watchObservedRunningTime="2026-01-26 17:01:21.316251539 +0000 UTC m=+177.269505520"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.319585 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1afc0f4c-e02d-4a70-aaba-e761e8c04eee-client-ca" (OuterVolumeSpecName: "client-ca") pod "1afc0f4c-e02d-4a70-aaba-e761e8c04eee" (UID: "1afc0f4c-e02d-4a70-aaba-e761e8c04eee"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.321073 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1afc0f4c-e02d-4a70-aaba-e761e8c04eee-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1afc0f4c-e02d-4a70-aaba-e761e8c04eee" (UID: "1afc0f4c-e02d-4a70-aaba-e761e8c04eee"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.322181 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1afc0f4c-e02d-4a70-aaba-e761e8c04eee-config" (OuterVolumeSpecName: "config") pod "1afc0f4c-e02d-4a70-aaba-e761e8c04eee" (UID: "1afc0f4c-e02d-4a70-aaba-e761e8c04eee"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.334057 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1afc0f4c-e02d-4a70-aaba-e761e8c04eee-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1afc0f4c-e02d-4a70-aaba-e761e8c04eee" (UID: "1afc0f4c-e02d-4a70-aaba-e761e8c04eee"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.339691 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5457c9f8cf-lcqcl"]
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.351919 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mqxwf"]
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.357645 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1afc0f4c-e02d-4a70-aaba-e761e8c04eee-kube-api-access-mr8gn" (OuterVolumeSpecName: "kube-api-access-mr8gn") pod "1afc0f4c-e02d-4a70-aaba-e761e8c04eee" (UID: "1afc0f4c-e02d-4a70-aaba-e761e8c04eee"). InnerVolumeSpecName "kube-api-access-mr8gn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.362133 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" podStartSLOduration=147.362102271 podStartE2EDuration="2m27.362102271s" podCreationTimestamp="2026-01-26 16:58:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:01:21.333028304 +0000 UTC m=+177.286282285" watchObservedRunningTime="2026-01-26 17:01:21.362102271 +0000 UTC m=+177.315356262"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.396599 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cb449784d-bqprm"]
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.420604 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vhl4\" (UniqueName: \"kubernetes.io/projected/54dee8cd-259a-4c9f-9e56-fbd0ea167f46-kube-api-access-9vhl4\") pod \"route-controller-manager-cb449784d-bqprm\" (UID: \"54dee8cd-259a-4c9f-9e56-fbd0ea167f46\") " pod="openshift-route-controller-manager/route-controller-manager-cb449784d-bqprm"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.420649 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c71e219-35d7-4e1e-a371-3456dfd29e83-utilities\") pod \"redhat-operators-mqxwf\" (UID: \"9c71e219-35d7-4e1e-a371-3456dfd29e83\") " pod="openshift-marketplace/redhat-operators-mqxwf"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.420671 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sjtb\" (UniqueName: \"kubernetes.io/projected/50b63435-18c4-4fcb-821c-3d88abc7b728-kube-api-access-4sjtb\") pod \"controller-manager-5457c9f8cf-lcqcl\" (UID: \"50b63435-18c4-4fcb-821c-3d88abc7b728\") " pod="openshift-controller-manager/controller-manager-5457c9f8cf-lcqcl"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.420694 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c71e219-35d7-4e1e-a371-3456dfd29e83-catalog-content\") pod \"redhat-operators-mqxwf\" (UID: \"9c71e219-35d7-4e1e-a371-3456dfd29e83\") " pod="openshift-marketplace/redhat-operators-mqxwf"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.420713 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6swzw\" (UniqueName: \"kubernetes.io/projected/9c71e219-35d7-4e1e-a371-3456dfd29e83-kube-api-access-6swzw\") pod \"redhat-operators-mqxwf\" (UID: \"9c71e219-35d7-4e1e-a371-3456dfd29e83\") " pod="openshift-marketplace/redhat-operators-mqxwf"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.420742 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/50b63435-18c4-4fcb-821c-3d88abc7b728-proxy-ca-bundles\") pod \"controller-manager-5457c9f8cf-lcqcl\" (UID: \"50b63435-18c4-4fcb-821c-3d88abc7b728\") " pod="openshift-controller-manager/controller-manager-5457c9f8cf-lcqcl"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.420769 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/54dee8cd-259a-4c9f-9e56-fbd0ea167f46-client-ca\") pod \"route-controller-manager-cb449784d-bqprm\" (UID: \"54dee8cd-259a-4c9f-9e56-fbd0ea167f46\") " pod="openshift-route-controller-manager/route-controller-manager-cb449784d-bqprm"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.420791 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50b63435-18c4-4fcb-821c-3d88abc7b728-serving-cert\") pod \"controller-manager-5457c9f8cf-lcqcl\" (UID: \"50b63435-18c4-4fcb-821c-3d88abc7b728\") " pod="openshift-controller-manager/controller-manager-5457c9f8cf-lcqcl"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.420810 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54dee8cd-259a-4c9f-9e56-fbd0ea167f46-serving-cert\") pod \"route-controller-manager-cb449784d-bqprm\" (UID: \"54dee8cd-259a-4c9f-9e56-fbd0ea167f46\") " pod="openshift-route-controller-manager/route-controller-manager-cb449784d-bqprm"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.420835 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/50b63435-18c4-4fcb-821c-3d88abc7b728-client-ca\") pod \"controller-manager-5457c9f8cf-lcqcl\" (UID: \"50b63435-18c4-4fcb-821c-3d88abc7b728\") " pod="openshift-controller-manager/controller-manager-5457c9f8cf-lcqcl"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.420867 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54dee8cd-259a-4c9f-9e56-fbd0ea167f46-config\") pod \"route-controller-manager-cb449784d-bqprm\" (UID: \"54dee8cd-259a-4c9f-9e56-fbd0ea167f46\") " pod="openshift-route-controller-manager/route-controller-manager-cb449784d-bqprm"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.420883 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50b63435-18c4-4fcb-821c-3d88abc7b728-config\") pod \"controller-manager-5457c9f8cf-lcqcl\" (UID: \"50b63435-18c4-4fcb-821c-3d88abc7b728\") " pod="openshift-controller-manager/controller-manager-5457c9f8cf-lcqcl"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.420916 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mr8gn\" (UniqueName: \"kubernetes.io/projected/1afc0f4c-e02d-4a70-aaba-e761e8c04eee-kube-api-access-mr8gn\") on node \"crc\" DevicePath \"\""
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.420927 4856 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1afc0f4c-e02d-4a70-aaba-e761e8c04eee-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.420938 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1afc0f4c-e02d-4a70-aaba-e761e8c04eee-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.420947 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1afc0f4c-e02d-4a70-aaba-e761e8c04eee-config\") on node \"crc\" DevicePath \"\""
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.420955 4856 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1afc0f4c-e02d-4a70-aaba-e761e8c04eee-client-ca\") on node \"crc\" DevicePath \"\""
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.428998 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.429842 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5457c9f8cf-lcqcl"]
Jan 26 17:01:21 crc kubenswrapper[4856]: E0126 17:01:21.430194 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-4sjtb proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-5457c9f8cf-lcqcl" podUID="50b63435-18c4-4fcb-821c-3d88abc7b728"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.488176 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fpqvc"]
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.492891 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fpqvc"]
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.535395 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50b63435-18c4-4fcb-821c-3d88abc7b728-serving-cert\") pod \"controller-manager-5457c9f8cf-lcqcl\" (UID: \"50b63435-18c4-4fcb-821c-3d88abc7b728\") " pod="openshift-controller-manager/controller-manager-5457c9f8cf-lcqcl"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.535448 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54dee8cd-259a-4c9f-9e56-fbd0ea167f46-serving-cert\") pod \"route-controller-manager-cb449784d-bqprm\" (UID: \"54dee8cd-259a-4c9f-9e56-fbd0ea167f46\") " pod="openshift-route-controller-manager/route-controller-manager-cb449784d-bqprm"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.535482 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/50b63435-18c4-4fcb-821c-3d88abc7b728-client-ca\") pod \"controller-manager-5457c9f8cf-lcqcl\" (UID: \"50b63435-18c4-4fcb-821c-3d88abc7b728\") " pod="openshift-controller-manager/controller-manager-5457c9f8cf-lcqcl"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.535547 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54dee8cd-259a-4c9f-9e56-fbd0ea167f46-config\") pod \"route-controller-manager-cb449784d-bqprm\" (UID: \"54dee8cd-259a-4c9f-9e56-fbd0ea167f46\") " pod="openshift-route-controller-manager/route-controller-manager-cb449784d-bqprm"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.535586 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50b63435-18c4-4fcb-821c-3d88abc7b728-config\") pod \"controller-manager-5457c9f8cf-lcqcl\" (UID: \"50b63435-18c4-4fcb-821c-3d88abc7b728\") " pod="openshift-controller-manager/controller-manager-5457c9f8cf-lcqcl"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.535630 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vhl4\" (UniqueName: \"kubernetes.io/projected/54dee8cd-259a-4c9f-9e56-fbd0ea167f46-kube-api-access-9vhl4\") pod \"route-controller-manager-cb449784d-bqprm\" (UID: \"54dee8cd-259a-4c9f-9e56-fbd0ea167f46\") " pod="openshift-route-controller-manager/route-controller-manager-cb449784d-bqprm"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.535664 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c71e219-35d7-4e1e-a371-3456dfd29e83-utilities\") pod \"redhat-operators-mqxwf\" (UID: \"9c71e219-35d7-4e1e-a371-3456dfd29e83\") " pod="openshift-marketplace/redhat-operators-mqxwf"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.535686 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sjtb\" (UniqueName: \"kubernetes.io/projected/50b63435-18c4-4fcb-821c-3d88abc7b728-kube-api-access-4sjtb\") pod \"controller-manager-5457c9f8cf-lcqcl\" (UID: \"50b63435-18c4-4fcb-821c-3d88abc7b728\") " pod="openshift-controller-manager/controller-manager-5457c9f8cf-lcqcl"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.535709 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c71e219-35d7-4e1e-a371-3456dfd29e83-catalog-content\") pod \"redhat-operators-mqxwf\" (UID: \"9c71e219-35d7-4e1e-a371-3456dfd29e83\") " pod="openshift-marketplace/redhat-operators-mqxwf"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.535727 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6swzw\" (UniqueName: \"kubernetes.io/projected/9c71e219-35d7-4e1e-a371-3456dfd29e83-kube-api-access-6swzw\") pod \"redhat-operators-mqxwf\" (UID: \"9c71e219-35d7-4e1e-a371-3456dfd29e83\") " pod="openshift-marketplace/redhat-operators-mqxwf"
Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.535754 4856 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/50b63435-18c4-4fcb-821c-3d88abc7b728-proxy-ca-bundles\") pod \"controller-manager-5457c9f8cf-lcqcl\" (UID: \"50b63435-18c4-4fcb-821c-3d88abc7b728\") " pod="openshift-controller-manager/controller-manager-5457c9f8cf-lcqcl" Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.535781 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/54dee8cd-259a-4c9f-9e56-fbd0ea167f46-client-ca\") pod \"route-controller-manager-cb449784d-bqprm\" (UID: \"54dee8cd-259a-4c9f-9e56-fbd0ea167f46\") " pod="openshift-route-controller-manager/route-controller-manager-cb449784d-bqprm" Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.536690 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c71e219-35d7-4e1e-a371-3456dfd29e83-utilities\") pod \"redhat-operators-mqxwf\" (UID: \"9c71e219-35d7-4e1e-a371-3456dfd29e83\") " pod="openshift-marketplace/redhat-operators-mqxwf" Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.536960 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c71e219-35d7-4e1e-a371-3456dfd29e83-catalog-content\") pod \"redhat-operators-mqxwf\" (UID: \"9c71e219-35d7-4e1e-a371-3456dfd29e83\") " pod="openshift-marketplace/redhat-operators-mqxwf" Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.537564 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/50b63435-18c4-4fcb-821c-3d88abc7b728-client-ca\") pod \"controller-manager-5457c9f8cf-lcqcl\" (UID: \"50b63435-18c4-4fcb-821c-3d88abc7b728\") " pod="openshift-controller-manager/controller-manager-5457c9f8cf-lcqcl" Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.538491 4856 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50b63435-18c4-4fcb-821c-3d88abc7b728-config\") pod \"controller-manager-5457c9f8cf-lcqcl\" (UID: \"50b63435-18c4-4fcb-821c-3d88abc7b728\") " pod="openshift-controller-manager/controller-manager-5457c9f8cf-lcqcl" Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.540582 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54dee8cd-259a-4c9f-9e56-fbd0ea167f46-config\") pod \"route-controller-manager-cb449784d-bqprm\" (UID: \"54dee8cd-259a-4c9f-9e56-fbd0ea167f46\") " pod="openshift-route-controller-manager/route-controller-manager-cb449784d-bqprm" Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.540939 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54dee8cd-259a-4c9f-9e56-fbd0ea167f46-serving-cert\") pod \"route-controller-manager-cb449784d-bqprm\" (UID: \"54dee8cd-259a-4c9f-9e56-fbd0ea167f46\") " pod="openshift-route-controller-manager/route-controller-manager-cb449784d-bqprm" Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.543100 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50b63435-18c4-4fcb-821c-3d88abc7b728-serving-cert\") pod \"controller-manager-5457c9f8cf-lcqcl\" (UID: \"50b63435-18c4-4fcb-821c-3d88abc7b728\") " pod="openshift-controller-manager/controller-manager-5457c9f8cf-lcqcl" Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.543730 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/50b63435-18c4-4fcb-821c-3d88abc7b728-proxy-ca-bundles\") pod \"controller-manager-5457c9f8cf-lcqcl\" (UID: \"50b63435-18c4-4fcb-821c-3d88abc7b728\") " pod="openshift-controller-manager/controller-manager-5457c9f8cf-lcqcl" Jan 26 
17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.554024 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6swzw\" (UniqueName: \"kubernetes.io/projected/9c71e219-35d7-4e1e-a371-3456dfd29e83-kube-api-access-6swzw\") pod \"redhat-operators-mqxwf\" (UID: \"9c71e219-35d7-4e1e-a371-3456dfd29e83\") " pod="openshift-marketplace/redhat-operators-mqxwf" Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.556991 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/54dee8cd-259a-4c9f-9e56-fbd0ea167f46-client-ca\") pod \"route-controller-manager-cb449784d-bqprm\" (UID: \"54dee8cd-259a-4c9f-9e56-fbd0ea167f46\") " pod="openshift-route-controller-manager/route-controller-manager-cb449784d-bqprm" Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.557843 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vhl4\" (UniqueName: \"kubernetes.io/projected/54dee8cd-259a-4c9f-9e56-fbd0ea167f46-kube-api-access-9vhl4\") pod \"route-controller-manager-cb449784d-bqprm\" (UID: \"54dee8cd-259a-4c9f-9e56-fbd0ea167f46\") " pod="openshift-route-controller-manager/route-controller-manager-cb449784d-bqprm" Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.559242 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sjtb\" (UniqueName: \"kubernetes.io/projected/50b63435-18c4-4fcb-821c-3d88abc7b728-kube-api-access-4sjtb\") pod \"controller-manager-5457c9f8cf-lcqcl\" (UID: \"50b63435-18c4-4fcb-821c-3d88abc7b728\") " pod="openshift-controller-manager/controller-manager-5457c9f8cf-lcqcl" Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.599149 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qknj9"] Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.616106 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cb449784d-bqprm" Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.647485 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mqxwf" Jan 26 17:01:21 crc kubenswrapper[4856]: I0126 17:01:21.906266 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mqxwf"] Jan 26 17:01:21 crc kubenswrapper[4856]: W0126 17:01:21.971695 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c71e219_35d7_4e1e_a371_3456dfd29e83.slice/crio-d09c4604a24ed1fd63afc114569ecaa6c0c08542e351c04817bb0f8a62c19b49 WatchSource:0}: Error finding container d09c4604a24ed1fd63afc114569ecaa6c0c08542e351c04817bb0f8a62c19b49: Status 404 returned error can't find the container with id d09c4604a24ed1fd63afc114569ecaa6c0c08542e351c04817bb0f8a62c19b49 Jan 26 17:01:22 crc kubenswrapper[4856]: I0126 17:01:22.059682 4856 patch_prober.go:28] interesting pod/router-default-5444994796-h9b2g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 17:01:22 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Jan 26 17:01:22 crc kubenswrapper[4856]: [+]process-running ok Jan 26 17:01:22 crc kubenswrapper[4856]: healthz check failed Jan 26 17:01:22 crc kubenswrapper[4856]: I0126 17:01:22.059738 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9b2g" podUID="85f05bd5-ff83-4d29-9531-ab3499088095" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:01:22 crc kubenswrapper[4856]: I0126 17:01:22.155413 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-cb449784d-bqprm"] Jan 26 17:01:22 crc kubenswrapper[4856]: I0126 17:01:22.157612 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mqxwf" event={"ID":"9c71e219-35d7-4e1e-a371-3456dfd29e83","Type":"ContainerStarted","Data":"e6e9fc1c7474ee1cf14a50a96e79036f97d946d338d8a18c3434197cbd0438a8"} Jan 26 17:01:22 crc kubenswrapper[4856]: I0126 17:01:22.157653 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mqxwf" event={"ID":"9c71e219-35d7-4e1e-a371-3456dfd29e83","Type":"ContainerStarted","Data":"d09c4604a24ed1fd63afc114569ecaa6c0c08542e351c04817bb0f8a62c19b49"} Jan 26 17:01:22 crc kubenswrapper[4856]: I0126 17:01:22.160312 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lndnt" event={"ID":"1afc0f4c-e02d-4a70-aaba-e761e8c04eee","Type":"ContainerDied","Data":"96859d6a59b58c9df792a590deef50eb0ee923d03cb16fdc72abe3d18e466eaa"} Jan 26 17:01:22 crc kubenswrapper[4856]: I0126 17:01:22.160907 4856 scope.go:117] "RemoveContainer" containerID="e9e54e2a4a2266ca4148b11cb38df08f87c2f2ccd87dc3343d147862786c16e2" Jan 26 17:01:22 crc kubenswrapper[4856]: I0126 17:01:22.160324 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lndnt" Jan 26 17:01:22 crc kubenswrapper[4856]: I0126 17:01:22.164206 4856 generic.go:334] "Generic (PLEG): container finished" podID="a3fa94fe-e4ad-4171-b853-89878dc61569" containerID="6c718aeedef34f07c2686370f8f78fe4060881e116396cc02bb806370cffdb47" exitCode=0 Jan 26 17:01:22 crc kubenswrapper[4856]: I0126 17:01:22.164308 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qknj9" event={"ID":"a3fa94fe-e4ad-4171-b853-89878dc61569","Type":"ContainerDied","Data":"6c718aeedef34f07c2686370f8f78fe4060881e116396cc02bb806370cffdb47"} Jan 26 17:01:22 crc kubenswrapper[4856]: I0126 17:01:22.164342 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qknj9" event={"ID":"a3fa94fe-e4ad-4171-b853-89878dc61569","Type":"ContainerStarted","Data":"083b0b52d78f857657f62965a6b3636eba0ff933ac74b23de919043206cf9046"} Jan 26 17:01:22 crc kubenswrapper[4856]: I0126 17:01:22.171479 4856 generic.go:334] "Generic (PLEG): container finished" podID="7e9ee376-b7c7-4b6a-91b3-cc86a3a02dc7" containerID="655c350d2621ac99cae47d6117abe996be96564e1734dccd0a74e6f8446d8e6d" exitCode=0 Jan 26 17:01:22 crc kubenswrapper[4856]: I0126 17:01:22.172354 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-8q6q4" event={"ID":"7e9ee376-b7c7-4b6a-91b3-cc86a3a02dc7","Type":"ContainerDied","Data":"655c350d2621ac99cae47d6117abe996be96564e1734dccd0a74e6f8446d8e6d"} Jan 26 17:01:22 crc kubenswrapper[4856]: I0126 17:01:22.172493 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5457c9f8cf-lcqcl" Jan 26 17:01:22 crc kubenswrapper[4856]: W0126 17:01:22.201807 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54dee8cd_259a_4c9f_9e56_fbd0ea167f46.slice/crio-960ff00bedf28636eb04c4f352e2d6d2e33a5ceb9800e901e018103cd5ac5859 WatchSource:0}: Error finding container 960ff00bedf28636eb04c4f352e2d6d2e33a5ceb9800e901e018103cd5ac5859: Status 404 returned error can't find the container with id 960ff00bedf28636eb04c4f352e2d6d2e33a5ceb9800e901e018103cd5ac5859 Jan 26 17:01:22 crc kubenswrapper[4856]: I0126 17:01:22.219045 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5457c9f8cf-lcqcl" Jan 26 17:01:22 crc kubenswrapper[4856]: I0126 17:01:22.236418 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lndnt"] Jan 26 17:01:22 crc kubenswrapper[4856]: I0126 17:01:22.240339 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lndnt"] Jan 26 17:01:22 crc kubenswrapper[4856]: I0126 17:01:22.348003 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/50b63435-18c4-4fcb-821c-3d88abc7b728-client-ca\") pod \"50b63435-18c4-4fcb-821c-3d88abc7b728\" (UID: \"50b63435-18c4-4fcb-821c-3d88abc7b728\") " Jan 26 17:01:22 crc kubenswrapper[4856]: I0126 17:01:22.348636 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4sjtb\" (UniqueName: \"kubernetes.io/projected/50b63435-18c4-4fcb-821c-3d88abc7b728-kube-api-access-4sjtb\") pod \"50b63435-18c4-4fcb-821c-3d88abc7b728\" (UID: \"50b63435-18c4-4fcb-821c-3d88abc7b728\") " Jan 26 17:01:22 crc kubenswrapper[4856]: I0126 17:01:22.348731 
4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/50b63435-18c4-4fcb-821c-3d88abc7b728-proxy-ca-bundles\") pod \"50b63435-18c4-4fcb-821c-3d88abc7b728\" (UID: \"50b63435-18c4-4fcb-821c-3d88abc7b728\") " Jan 26 17:01:22 crc kubenswrapper[4856]: I0126 17:01:22.348750 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50b63435-18c4-4fcb-821c-3d88abc7b728-config\") pod \"50b63435-18c4-4fcb-821c-3d88abc7b728\" (UID: \"50b63435-18c4-4fcb-821c-3d88abc7b728\") " Jan 26 17:01:22 crc kubenswrapper[4856]: I0126 17:01:22.348730 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50b63435-18c4-4fcb-821c-3d88abc7b728-client-ca" (OuterVolumeSpecName: "client-ca") pod "50b63435-18c4-4fcb-821c-3d88abc7b728" (UID: "50b63435-18c4-4fcb-821c-3d88abc7b728"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:01:22 crc kubenswrapper[4856]: I0126 17:01:22.348771 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50b63435-18c4-4fcb-821c-3d88abc7b728-serving-cert\") pod \"50b63435-18c4-4fcb-821c-3d88abc7b728\" (UID: \"50b63435-18c4-4fcb-821c-3d88abc7b728\") " Jan 26 17:01:22 crc kubenswrapper[4856]: I0126 17:01:22.349317 4856 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/50b63435-18c4-4fcb-821c-3d88abc7b728-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 17:01:22 crc kubenswrapper[4856]: I0126 17:01:22.349333 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50b63435-18c4-4fcb-821c-3d88abc7b728-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "50b63435-18c4-4fcb-821c-3d88abc7b728" (UID: "50b63435-18c4-4fcb-821c-3d88abc7b728"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:01:22 crc kubenswrapper[4856]: I0126 17:01:22.352259 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50b63435-18c4-4fcb-821c-3d88abc7b728-config" (OuterVolumeSpecName: "config") pod "50b63435-18c4-4fcb-821c-3d88abc7b728" (UID: "50b63435-18c4-4fcb-821c-3d88abc7b728"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:01:22 crc kubenswrapper[4856]: I0126 17:01:22.364718 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50b63435-18c4-4fcb-821c-3d88abc7b728-kube-api-access-4sjtb" (OuterVolumeSpecName: "kube-api-access-4sjtb") pod "50b63435-18c4-4fcb-821c-3d88abc7b728" (UID: "50b63435-18c4-4fcb-821c-3d88abc7b728"). InnerVolumeSpecName "kube-api-access-4sjtb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:01:22 crc kubenswrapper[4856]: I0126 17:01:22.365042 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50b63435-18c4-4fcb-821c-3d88abc7b728-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "50b63435-18c4-4fcb-821c-3d88abc7b728" (UID: "50b63435-18c4-4fcb-821c-3d88abc7b728"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:01:22 crc kubenswrapper[4856]: I0126 17:01:22.450707 4856 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/50b63435-18c4-4fcb-821c-3d88abc7b728-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 17:01:22 crc kubenswrapper[4856]: I0126 17:01:22.450745 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50b63435-18c4-4fcb-821c-3d88abc7b728-config\") on node \"crc\" DevicePath \"\"" Jan 26 17:01:22 crc kubenswrapper[4856]: I0126 17:01:22.450755 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50b63435-18c4-4fcb-821c-3d88abc7b728-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 17:01:22 crc kubenswrapper[4856]: I0126 17:01:22.450765 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4sjtb\" (UniqueName: \"kubernetes.io/projected/50b63435-18c4-4fcb-821c-3d88abc7b728-kube-api-access-4sjtb\") on node \"crc\" DevicePath \"\"" Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.060804 4856 patch_prober.go:28] interesting pod/router-default-5444994796-h9b2g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 17:01:23 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Jan 26 17:01:23 crc kubenswrapper[4856]: 
[+]process-running ok Jan 26 17:01:23 crc kubenswrapper[4856]: healthz check failed Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.060868 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9b2g" podUID="85f05bd5-ff83-4d29-9531-ab3499088095" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.227033 4856 generic.go:334] "Generic (PLEG): container finished" podID="9c71e219-35d7-4e1e-a371-3456dfd29e83" containerID="e6e9fc1c7474ee1cf14a50a96e79036f97d946d338d8a18c3434197cbd0438a8" exitCode=0 Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.227161 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mqxwf" event={"ID":"9c71e219-35d7-4e1e-a371-3456dfd29e83","Type":"ContainerDied","Data":"e6e9fc1c7474ee1cf14a50a96e79036f97d946d338d8a18c3434197cbd0438a8"} Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.234868 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cb449784d-bqprm" event={"ID":"54dee8cd-259a-4c9f-9e56-fbd0ea167f46","Type":"ContainerStarted","Data":"3157d6dd787fe30eefd4db24c3f3619f52444746080c65c699eab5d0d02ab52a"} Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.234926 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cb449784d-bqprm" event={"ID":"54dee8cd-259a-4c9f-9e56-fbd0ea167f46","Type":"ContainerStarted","Data":"960ff00bedf28636eb04c4f352e2d6d2e33a5ceb9800e901e018103cd5ac5859"} Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.235884 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-cb449784d-bqprm" Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.249988 4856 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5457c9f8cf-lcqcl" Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.261918 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-cb449784d-bqprm" podStartSLOduration=4.261887795 podStartE2EDuration="4.261887795s" podCreationTimestamp="2026-01-26 17:01:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:01:23.259125884 +0000 UTC m=+179.212379885" watchObservedRunningTime="2026-01-26 17:01:23.261887795 +0000 UTC m=+179.215141776" Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.320994 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-cb449784d-bqprm" Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.322367 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-54b988dd69-ljwqg"] Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.328928 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5457c9f8cf-lcqcl"] Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.329035 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-54b988dd69-ljwqg" Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.334045 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5457c9f8cf-lcqcl"] Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.334599 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.334785 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.335327 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.336912 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.337547 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.337724 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.339717 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-54b988dd69-ljwqg"] Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.343272 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.439307 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1afc0f4c-e02d-4a70-aaba-e761e8c04eee" 
path="/var/lib/kubelet/pods/1afc0f4c-e02d-4a70-aaba-e761e8c04eee/volumes" Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.440352 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50b63435-18c4-4fcb-821c-3d88abc7b728" path="/var/lib/kubelet/pods/50b63435-18c4-4fcb-821c-3d88abc7b728/volumes" Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.443123 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe6baed-ab97-4d8a-8be2-6f00f9698136" path="/var/lib/kubelet/pods/5fe6baed-ab97-4d8a-8be2-6f00f9698136/volumes" Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.476025 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de17aec3-fab1-4a5e-bd46-6a1545b93a89-client-ca\") pod \"controller-manager-54b988dd69-ljwqg\" (UID: \"de17aec3-fab1-4a5e-bd46-6a1545b93a89\") " pod="openshift-controller-manager/controller-manager-54b988dd69-ljwqg" Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.476077 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de17aec3-fab1-4a5e-bd46-6a1545b93a89-proxy-ca-bundles\") pod \"controller-manager-54b988dd69-ljwqg\" (UID: \"de17aec3-fab1-4a5e-bd46-6a1545b93a89\") " pod="openshift-controller-manager/controller-manager-54b988dd69-ljwqg" Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.476108 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de17aec3-fab1-4a5e-bd46-6a1545b93a89-config\") pod \"controller-manager-54b988dd69-ljwqg\" (UID: \"de17aec3-fab1-4a5e-bd46-6a1545b93a89\") " pod="openshift-controller-manager/controller-manager-54b988dd69-ljwqg" Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.476146 4856 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de17aec3-fab1-4a5e-bd46-6a1545b93a89-serving-cert\") pod \"controller-manager-54b988dd69-ljwqg\" (UID: \"de17aec3-fab1-4a5e-bd46-6a1545b93a89\") " pod="openshift-controller-manager/controller-manager-54b988dd69-ljwqg" Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.476173 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxzpz\" (UniqueName: \"kubernetes.io/projected/de17aec3-fab1-4a5e-bd46-6a1545b93a89-kube-api-access-sxzpz\") pod \"controller-manager-54b988dd69-ljwqg\" (UID: \"de17aec3-fab1-4a5e-bd46-6a1545b93a89\") " pod="openshift-controller-manager/controller-manager-54b988dd69-ljwqg" Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.577458 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de17aec3-fab1-4a5e-bd46-6a1545b93a89-serving-cert\") pod \"controller-manager-54b988dd69-ljwqg\" (UID: \"de17aec3-fab1-4a5e-bd46-6a1545b93a89\") " pod="openshift-controller-manager/controller-manager-54b988dd69-ljwqg" Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.577517 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxzpz\" (UniqueName: \"kubernetes.io/projected/de17aec3-fab1-4a5e-bd46-6a1545b93a89-kube-api-access-sxzpz\") pod \"controller-manager-54b988dd69-ljwqg\" (UID: \"de17aec3-fab1-4a5e-bd46-6a1545b93a89\") " pod="openshift-controller-manager/controller-manager-54b988dd69-ljwqg" Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.577581 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de17aec3-fab1-4a5e-bd46-6a1545b93a89-client-ca\") pod \"controller-manager-54b988dd69-ljwqg\" (UID: \"de17aec3-fab1-4a5e-bd46-6a1545b93a89\") " 
pod="openshift-controller-manager/controller-manager-54b988dd69-ljwqg" Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.577604 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de17aec3-fab1-4a5e-bd46-6a1545b93a89-proxy-ca-bundles\") pod \"controller-manager-54b988dd69-ljwqg\" (UID: \"de17aec3-fab1-4a5e-bd46-6a1545b93a89\") " pod="openshift-controller-manager/controller-manager-54b988dd69-ljwqg" Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.577634 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de17aec3-fab1-4a5e-bd46-6a1545b93a89-config\") pod \"controller-manager-54b988dd69-ljwqg\" (UID: \"de17aec3-fab1-4a5e-bd46-6a1545b93a89\") " pod="openshift-controller-manager/controller-manager-54b988dd69-ljwqg" Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.578848 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de17aec3-fab1-4a5e-bd46-6a1545b93a89-config\") pod \"controller-manager-54b988dd69-ljwqg\" (UID: \"de17aec3-fab1-4a5e-bd46-6a1545b93a89\") " pod="openshift-controller-manager/controller-manager-54b988dd69-ljwqg" Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.580394 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de17aec3-fab1-4a5e-bd46-6a1545b93a89-client-ca\") pod \"controller-manager-54b988dd69-ljwqg\" (UID: \"de17aec3-fab1-4a5e-bd46-6a1545b93a89\") " pod="openshift-controller-manager/controller-manager-54b988dd69-ljwqg" Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.580699 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de17aec3-fab1-4a5e-bd46-6a1545b93a89-proxy-ca-bundles\") pod 
\"controller-manager-54b988dd69-ljwqg\" (UID: \"de17aec3-fab1-4a5e-bd46-6a1545b93a89\") " pod="openshift-controller-manager/controller-manager-54b988dd69-ljwqg" Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.582984 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de17aec3-fab1-4a5e-bd46-6a1545b93a89-serving-cert\") pod \"controller-manager-54b988dd69-ljwqg\" (UID: \"de17aec3-fab1-4a5e-bd46-6a1545b93a89\") " pod="openshift-controller-manager/controller-manager-54b988dd69-ljwqg" Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.597885 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxzpz\" (UniqueName: \"kubernetes.io/projected/de17aec3-fab1-4a5e-bd46-6a1545b93a89-kube-api-access-sxzpz\") pod \"controller-manager-54b988dd69-ljwqg\" (UID: \"de17aec3-fab1-4a5e-bd46-6a1545b93a89\") " pod="openshift-controller-manager/controller-manager-54b988dd69-ljwqg" Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.652173 4856 patch_prober.go:28] interesting pod/console-f9d7485db-6qgnn container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.26:8443/health\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.652248 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-6qgnn" podUID="b28404ed-2e71-4b3f-9140-35ee89dbc8f2" containerName="console" probeResult="failure" output="Get \"https://10.217.0.26:8443/health\": dial tcp 10.217.0.26:8443: connect: connection refused" Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.669434 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-54b988dd69-ljwqg" Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.694471 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-8q6q4" Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.704267 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-6rlxp" Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.709828 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-6rlxp" Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.852843 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-7l927 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.852876 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-7l927 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.852925 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7l927" podUID="94291fa4-24a5-499e-8143-89c8784d9284" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.852925 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-7l927" podUID="94291fa4-24a5-499e-8143-89c8784d9284" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.852995 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-7l927" Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.853570 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-7l927 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.853595 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7l927" podUID="94291fa4-24a5-499e-8143-89c8784d9284" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.853677 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"d13c7142b05ed798c0e5b16508a221e2918021dbec60054995ac94f05ffdad09"} pod="openshift-console/downloads-7954f5f757-7l927" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.853721 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-7l927" podUID="94291fa4-24a5-499e-8143-89c8784d9284" containerName="download-server" containerID="cri-o://d13c7142b05ed798c0e5b16508a221e2918021dbec60054995ac94f05ffdad09" gracePeriod=2 Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.884953 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxgfj\" (UniqueName: 
\"kubernetes.io/projected/7e9ee376-b7c7-4b6a-91b3-cc86a3a02dc7-kube-api-access-kxgfj\") pod \"7e9ee376-b7c7-4b6a-91b3-cc86a3a02dc7\" (UID: \"7e9ee376-b7c7-4b6a-91b3-cc86a3a02dc7\") " Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.885019 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e9ee376-b7c7-4b6a-91b3-cc86a3a02dc7-config-volume\") pod \"7e9ee376-b7c7-4b6a-91b3-cc86a3a02dc7\" (UID: \"7e9ee376-b7c7-4b6a-91b3-cc86a3a02dc7\") " Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.885080 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e9ee376-b7c7-4b6a-91b3-cc86a3a02dc7-secret-volume\") pod \"7e9ee376-b7c7-4b6a-91b3-cc86a3a02dc7\" (UID: \"7e9ee376-b7c7-4b6a-91b3-cc86a3a02dc7\") " Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.890129 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e9ee376-b7c7-4b6a-91b3-cc86a3a02dc7-config-volume" (OuterVolumeSpecName: "config-volume") pod "7e9ee376-b7c7-4b6a-91b3-cc86a3a02dc7" (UID: "7e9ee376-b7c7-4b6a-91b3-cc86a3a02dc7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.893738 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.895650 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e9ee376-b7c7-4b6a-91b3-cc86a3a02dc7-kube-api-access-kxgfj" (OuterVolumeSpecName: "kube-api-access-kxgfj") pod "7e9ee376-b7c7-4b6a-91b3-cc86a3a02dc7" (UID: "7e9ee376-b7c7-4b6a-91b3-cc86a3a02dc7"). InnerVolumeSpecName "kube-api-access-kxgfj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.905246 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-4pbj2" Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.906117 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e9ee376-b7c7-4b6a-91b3-cc86a3a02dc7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7e9ee376-b7c7-4b6a-91b3-cc86a3a02dc7" (UID: "7e9ee376-b7c7-4b6a-91b3-cc86a3a02dc7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.987066 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxgfj\" (UniqueName: \"kubernetes.io/projected/7e9ee376-b7c7-4b6a-91b3-cc86a3a02dc7-kube-api-access-kxgfj\") on node \"crc\" DevicePath \"\"" Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.987096 4856 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e9ee376-b7c7-4b6a-91b3-cc86a3a02dc7-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 17:01:23 crc kubenswrapper[4856]: I0126 17:01:23.987105 4856 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e9ee376-b7c7-4b6a-91b3-cc86a3a02dc7-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 17:01:24 crc kubenswrapper[4856]: I0126 17:01:24.069437 4856 patch_prober.go:28] interesting pod/router-default-5444994796-h9b2g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 17:01:24 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Jan 26 17:01:24 crc kubenswrapper[4856]: [+]process-running ok Jan 26 17:01:24 crc kubenswrapper[4856]: 
healthz check failed Jan 26 17:01:24 crc kubenswrapper[4856]: I0126 17:01:24.069496 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9b2g" podUID="85f05bd5-ff83-4d29-9531-ab3499088095" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:01:24 crc kubenswrapper[4856]: I0126 17:01:24.181849 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-wvttb" Jan 26 17:01:24 crc kubenswrapper[4856]: I0126 17:01:24.250684 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-54b988dd69-ljwqg"] Jan 26 17:01:24 crc kubenswrapper[4856]: I0126 17:01:24.328697 4856 generic.go:334] "Generic (PLEG): container finished" podID="94291fa4-24a5-499e-8143-89c8784d9284" containerID="d13c7142b05ed798c0e5b16508a221e2918021dbec60054995ac94f05ffdad09" exitCode=0 Jan 26 17:01:24 crc kubenswrapper[4856]: I0126 17:01:24.329055 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-7l927" event={"ID":"94291fa4-24a5-499e-8143-89c8784d9284","Type":"ContainerDied","Data":"d13c7142b05ed798c0e5b16508a221e2918021dbec60054995ac94f05ffdad09"} Jan 26 17:01:24 crc kubenswrapper[4856]: I0126 17:01:24.337029 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-8q6q4" Jan 26 17:01:24 crc kubenswrapper[4856]: I0126 17:01:24.345985 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-8q6q4" event={"ID":"7e9ee376-b7c7-4b6a-91b3-cc86a3a02dc7","Type":"ContainerDied","Data":"d8fe561f33f411cab54065acf50663e1fea5f5209ab612f88976297cc920acef"} Jan 26 17:01:24 crc kubenswrapper[4856]: I0126 17:01:24.346127 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8fe561f33f411cab54065acf50663e1fea5f5209ab612f88976297cc920acef" Jan 26 17:01:25 crc kubenswrapper[4856]: I0126 17:01:25.060044 4856 patch_prober.go:28] interesting pod/router-default-5444994796-h9b2g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 17:01:25 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Jan 26 17:01:25 crc kubenswrapper[4856]: [+]process-running ok Jan 26 17:01:25 crc kubenswrapper[4856]: healthz check failed Jan 26 17:01:25 crc kubenswrapper[4856]: I0126 17:01:25.060156 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9b2g" podUID="85f05bd5-ff83-4d29-9531-ab3499088095" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:01:25 crc kubenswrapper[4856]: I0126 17:01:25.426786 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-7l927 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 26 17:01:25 crc kubenswrapper[4856]: I0126 17:01:25.427077 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7l927" 
podUID="94291fa4-24a5-499e-8143-89c8784d9284" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 26 17:01:25 crc kubenswrapper[4856]: I0126 17:01:25.447500 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-7l927" event={"ID":"94291fa4-24a5-499e-8143-89c8784d9284","Type":"ContainerStarted","Data":"1a3c8f728acaa63fa83450974ecec3e1e03ae7c892f5036f7c7f018fe224588c"} Jan 26 17:01:25 crc kubenswrapper[4856]: I0126 17:01:25.447591 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-7l927" Jan 26 17:01:25 crc kubenswrapper[4856]: I0126 17:01:25.463548 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54b988dd69-ljwqg" event={"ID":"de17aec3-fab1-4a5e-bd46-6a1545b93a89","Type":"ContainerStarted","Data":"5dd5f652c6d735efc7f4d83a862afc4f09c88afb0921e6ecb304af76698cc9c8"} Jan 26 17:01:25 crc kubenswrapper[4856]: I0126 17:01:25.463591 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54b988dd69-ljwqg" event={"ID":"de17aec3-fab1-4a5e-bd46-6a1545b93a89","Type":"ContainerStarted","Data":"1bccd3720f328e7d0b92fc36bcc35726a97ebe6a8070f5cbb1608de57071e2d0"} Jan 26 17:01:25 crc kubenswrapper[4856]: I0126 17:01:25.463610 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-54b988dd69-ljwqg" Jan 26 17:01:25 crc kubenswrapper[4856]: I0126 17:01:25.472359 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-54b988dd69-ljwqg" Jan 26 17:01:26 crc kubenswrapper[4856]: I0126 17:01:26.059974 4856 patch_prober.go:28] interesting pod/router-default-5444994796-h9b2g container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 17:01:26 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Jan 26 17:01:26 crc kubenswrapper[4856]: [+]process-running ok Jan 26 17:01:26 crc kubenswrapper[4856]: healthz check failed Jan 26 17:01:26 crc kubenswrapper[4856]: I0126 17:01:26.060360 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9b2g" podUID="85f05bd5-ff83-4d29-9531-ab3499088095" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:01:26 crc kubenswrapper[4856]: I0126 17:01:26.471435 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-7l927 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 26 17:01:26 crc kubenswrapper[4856]: I0126 17:01:26.471498 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7l927" podUID="94291fa4-24a5-499e-8143-89c8784d9284" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 26 17:01:26 crc kubenswrapper[4856]: I0126 17:01:26.938413 4856 patch_prober.go:28] interesting pod/machine-config-daemon-xm9cq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:01:26 crc kubenswrapper[4856]: I0126 17:01:26.938478 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:01:27 crc kubenswrapper[4856]: I0126 17:01:27.058701 4856 patch_prober.go:28] interesting pod/router-default-5444994796-h9b2g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 17:01:27 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Jan 26 17:01:27 crc kubenswrapper[4856]: [+]process-running ok Jan 26 17:01:27 crc kubenswrapper[4856]: healthz check failed Jan 26 17:01:27 crc kubenswrapper[4856]: I0126 17:01:27.058760 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9b2g" podUID="85f05bd5-ff83-4d29-9531-ab3499088095" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:01:28 crc kubenswrapper[4856]: I0126 17:01:28.064226 4856 patch_prober.go:28] interesting pod/router-default-5444994796-h9b2g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 17:01:28 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Jan 26 17:01:28 crc kubenswrapper[4856]: [+]process-running ok Jan 26 17:01:28 crc kubenswrapper[4856]: healthz check failed Jan 26 17:01:28 crc kubenswrapper[4856]: I0126 17:01:28.064874 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9b2g" podUID="85f05bd5-ff83-4d29-9531-ab3499088095" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:01:29 crc kubenswrapper[4856]: I0126 17:01:29.562093 4856 patch_prober.go:28] interesting pod/router-default-5444994796-h9b2g container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 17:01:29 crc kubenswrapper[4856]: [+]has-synced ok Jan 26 17:01:29 crc kubenswrapper[4856]: [+]process-running ok Jan 26 17:01:29 crc kubenswrapper[4856]: healthz check failed Jan 26 17:01:29 crc kubenswrapper[4856]: I0126 17:01:29.562177 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9b2g" podUID="85f05bd5-ff83-4d29-9531-ab3499088095" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 17:01:30 crc kubenswrapper[4856]: I0126 17:01:30.057284 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-h9b2g" Jan 26 17:01:30 crc kubenswrapper[4856]: I0126 17:01:30.060416 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-h9b2g" Jan 26 17:01:30 crc kubenswrapper[4856]: I0126 17:01:30.082325 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-54b988dd69-ljwqg" podStartSLOduration=9.082308566 podStartE2EDuration="9.082308566s" podCreationTimestamp="2026-01-26 17:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:01:25.534987137 +0000 UTC m=+181.488241118" watchObservedRunningTime="2026-01-26 17:01:30.082308566 +0000 UTC m=+186.035562537" Jan 26 17:01:33 crc kubenswrapper[4856]: I0126 17:01:33.858941 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-7l927" Jan 26 17:01:34 crc kubenswrapper[4856]: I0126 17:01:34.152941 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8m4l6" Jan 26 17:01:35 crc kubenswrapper[4856]: 
I0126 17:01:35.095759 4856 patch_prober.go:28] interesting pod/router-default-5444994796-h9b2g container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 17:01:35 crc kubenswrapper[4856]: I0126 17:01:35.095929 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-h9b2g" podUID="85f05bd5-ff83-4d29-9531-ab3499088095" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:01:35 crc kubenswrapper[4856]: I0126 17:01:35.109704 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-6qgnn" Jan 26 17:01:35 crc kubenswrapper[4856]: I0126 17:01:35.114211 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-6qgnn" Jan 26 17:01:39 crc kubenswrapper[4856]: I0126 17:01:39.428604 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" Jan 26 17:01:43 crc kubenswrapper[4856]: I0126 17:01:43.007808 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 17:01:54 crc kubenswrapper[4856]: I0126 17:01:54.059150 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 17:01:54 crc kubenswrapper[4856]: E0126 17:01:54.059877 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e9ee376-b7c7-4b6a-91b3-cc86a3a02dc7" containerName="collect-profiles" Jan 26 17:01:54 crc kubenswrapper[4856]: I0126 17:01:54.059890 4856 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7e9ee376-b7c7-4b6a-91b3-cc86a3a02dc7" containerName="collect-profiles" Jan 26 17:01:54 crc kubenswrapper[4856]: I0126 17:01:54.059996 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e9ee376-b7c7-4b6a-91b3-cc86a3a02dc7" containerName="collect-profiles" Jan 26 17:01:54 crc kubenswrapper[4856]: I0126 17:01:54.060355 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 17:01:54 crc kubenswrapper[4856]: I0126 17:01:54.067915 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 26 17:01:54 crc kubenswrapper[4856]: I0126 17:01:54.068938 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 26 17:01:54 crc kubenswrapper[4856]: I0126 17:01:54.072874 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 17:01:54 crc kubenswrapper[4856]: I0126 17:01:54.073175 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/55a1283a-85e0-497f-8c5d-9a28168cb810-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"55a1283a-85e0-497f-8c5d-9a28168cb810\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 17:01:54 crc kubenswrapper[4856]: I0126 17:01:54.073239 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/55a1283a-85e0-497f-8c5d-9a28168cb810-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"55a1283a-85e0-497f-8c5d-9a28168cb810\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 17:01:54 crc kubenswrapper[4856]: I0126 17:01:54.174039 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" 
(UniqueName: \"kubernetes.io/host-path/55a1283a-85e0-497f-8c5d-9a28168cb810-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"55a1283a-85e0-497f-8c5d-9a28168cb810\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 17:01:54 crc kubenswrapper[4856]: I0126 17:01:54.174129 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/55a1283a-85e0-497f-8c5d-9a28168cb810-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"55a1283a-85e0-497f-8c5d-9a28168cb810\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 17:01:54 crc kubenswrapper[4856]: I0126 17:01:54.174210 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/55a1283a-85e0-497f-8c5d-9a28168cb810-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"55a1283a-85e0-497f-8c5d-9a28168cb810\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 17:01:54 crc kubenswrapper[4856]: I0126 17:01:54.196116 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/55a1283a-85e0-497f-8c5d-9a28168cb810-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"55a1283a-85e0-497f-8c5d-9a28168cb810\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 17:01:54 crc kubenswrapper[4856]: I0126 17:01:54.388582 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 17:01:56 crc kubenswrapper[4856]: I0126 17:01:56.939109 4856 patch_prober.go:28] interesting pod/machine-config-daemon-xm9cq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:01:56 crc kubenswrapper[4856]: I0126 17:01:56.939435 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:01:58 crc kubenswrapper[4856]: I0126 17:01:58.461873 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 17:01:58 crc kubenswrapper[4856]: I0126 17:01:58.466901 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 17:01:58 crc kubenswrapper[4856]: I0126 17:01:58.471180 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 17:01:58 crc kubenswrapper[4856]: I0126 17:01:58.532670 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69379820-3062-4964-a8dd-8689f8cea38d-kube-api-access\") pod \"installer-9-crc\" (UID: \"69379820-3062-4964-a8dd-8689f8cea38d\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 17:01:58 crc kubenswrapper[4856]: I0126 17:01:58.532727 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/69379820-3062-4964-a8dd-8689f8cea38d-var-lock\") pod \"installer-9-crc\" (UID: \"69379820-3062-4964-a8dd-8689f8cea38d\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 17:01:58 crc kubenswrapper[4856]: I0126 17:01:58.532792 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/69379820-3062-4964-a8dd-8689f8cea38d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"69379820-3062-4964-a8dd-8689f8cea38d\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 17:01:58 crc kubenswrapper[4856]: I0126 17:01:58.634041 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/69379820-3062-4964-a8dd-8689f8cea38d-var-lock\") pod \"installer-9-crc\" (UID: \"69379820-3062-4964-a8dd-8689f8cea38d\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 17:01:58 crc kubenswrapper[4856]: I0126 17:01:58.634146 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/69379820-3062-4964-a8dd-8689f8cea38d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"69379820-3062-4964-a8dd-8689f8cea38d\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 17:01:58 crc kubenswrapper[4856]: I0126 17:01:58.634201 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/69379820-3062-4964-a8dd-8689f8cea38d-var-lock\") pod \"installer-9-crc\" (UID: \"69379820-3062-4964-a8dd-8689f8cea38d\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 17:01:58 crc kubenswrapper[4856]: I0126 17:01:58.634237 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69379820-3062-4964-a8dd-8689f8cea38d-kube-api-access\") pod \"installer-9-crc\" (UID: \"69379820-3062-4964-a8dd-8689f8cea38d\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 17:01:58 crc kubenswrapper[4856]: I0126 17:01:58.634366 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/69379820-3062-4964-a8dd-8689f8cea38d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"69379820-3062-4964-a8dd-8689f8cea38d\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 17:01:58 crc kubenswrapper[4856]: I0126 17:01:58.654018 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69379820-3062-4964-a8dd-8689f8cea38d-kube-api-access\") pod \"installer-9-crc\" (UID: \"69379820-3062-4964-a8dd-8689f8cea38d\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 17:01:58 crc kubenswrapper[4856]: I0126 17:01:58.802979 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 17:02:15 crc kubenswrapper[4856]: E0126 17:02:15.160269 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 26 17:02:15 crc kubenswrapper[4856]: E0126 17:02:15.161834 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tf49w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Containe
rResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-txmdl_openshift-marketplace(40a27476-22b1-4083-990e-66e70ccdaf4c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 17:02:15 crc kubenswrapper[4856]: E0126 17:02:15.163091 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-txmdl" podUID="40a27476-22b1-4083-990e-66e70ccdaf4c" Jan 26 17:02:16 crc kubenswrapper[4856]: E0126 17:02:16.589664 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-txmdl" podUID="40a27476-22b1-4083-990e-66e70ccdaf4c" Jan 26 17:02:16 crc kubenswrapper[4856]: E0126 17:02:16.658808 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 26 17:02:16 crc kubenswrapper[4856]: E0126 17:02:16.659000 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2qzgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-4kwt4_openshift-marketplace(d6944fc9-b8d7-4013-8702-b5765c410a0b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 17:02:16 crc kubenswrapper[4856]: E0126 17:02:16.660254 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-4kwt4" podUID="d6944fc9-b8d7-4013-8702-b5765c410a0b" Jan 26 17:02:16 crc 
kubenswrapper[4856]: E0126 17:02:16.687771 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 26 17:02:16 crc kubenswrapper[4856]: E0126 17:02:16.688070 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s92lp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-62nhd_openshift-marketplace(7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 17:02:16 crc kubenswrapper[4856]: E0126 17:02:16.689273 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-62nhd" podUID="7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766" Jan 26 17:02:20 crc kubenswrapper[4856]: E0126 17:02:20.666250 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4kwt4" podUID="d6944fc9-b8d7-4013-8702-b5765c410a0b" Jan 26 17:02:20 crc kubenswrapper[4856]: E0126 17:02:20.666308 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-62nhd" podUID="7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766" Jan 26 17:02:20 crc kubenswrapper[4856]: E0126 17:02:20.769112 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 26 17:02:20 crc kubenswrapper[4856]: E0126 17:02:20.769303 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wvpzb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-qknj9_openshift-marketplace(a3fa94fe-e4ad-4171-b853-89878dc61569): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 17:02:20 crc kubenswrapper[4856]: E0126 17:02:20.770485 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = 
Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-qknj9" podUID="a3fa94fe-e4ad-4171-b853-89878dc61569" Jan 26 17:02:22 crc kubenswrapper[4856]: E0126 17:02:22.116363 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-qknj9" podUID="a3fa94fe-e4ad-4171-b853-89878dc61569" Jan 26 17:02:22 crc kubenswrapper[4856]: E0126 17:02:22.190746 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 26 17:02:22 crc kubenswrapper[4856]: E0126 17:02:22.190909 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x7mvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-n8hp2_openshift-marketplace(a6086d4b-faeb-4a12-8e6a-2a178dfe374c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 17:02:22 crc kubenswrapper[4856]: E0126 17:02:22.192096 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-n8hp2" podUID="a6086d4b-faeb-4a12-8e6a-2a178dfe374c" Jan 26 17:02:22 crc 
kubenswrapper[4856]: E0126 17:02:22.203986 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 26 17:02:22 crc kubenswrapper[4856]: E0126 17:02:22.204442 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dcbvc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-g8bgt_openshift-marketplace(0d7eb7b8-63ae-493a-850b-0b9f3b42e927): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 17:02:22 crc kubenswrapper[4856]: E0126 17:02:22.205610 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-g8bgt" podUID="0d7eb7b8-63ae-493a-850b-0b9f3b42e927" Jan 26 17:02:22 crc kubenswrapper[4856]: E0126 17:02:22.234717 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 26 17:02:22 crc kubenswrapper[4856]: E0126 17:02:22.234931 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6swzw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-mqxwf_openshift-marketplace(9c71e219-35d7-4e1e-a371-3456dfd29e83): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 17:02:22 crc kubenswrapper[4856]: E0126 17:02:22.239436 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-mqxwf" podUID="9c71e219-35d7-4e1e-a371-3456dfd29e83" Jan 26 17:02:22 crc 
kubenswrapper[4856]: E0126 17:02:22.252702 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 26 17:02:22 crc kubenswrapper[4856]: E0126 17:02:22.253142 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gtxjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-qgjjd_openshift-marketplace(89cf05de-642b-4574-9f79-45e7a3d4afa3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 17:02:22 crc kubenswrapper[4856]: E0126 17:02:22.254322 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-qgjjd" podUID="89cf05de-642b-4574-9f79-45e7a3d4afa3" Jan 26 17:02:22 crc kubenswrapper[4856]: I0126 17:02:22.528874 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 17:02:22 crc kubenswrapper[4856]: I0126 17:02:22.592163 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 17:02:22 crc kubenswrapper[4856]: W0126 17:02:22.607358 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod69379820_3062_4964_a8dd_8689f8cea38d.slice/crio-af60ab5d4a2b57ad1bbcc4a879fdc9dce5f1b3ef1e2f5eb96e13241cdf6f2277 WatchSource:0}: Error finding container af60ab5d4a2b57ad1bbcc4a879fdc9dce5f1b3ef1e2f5eb96e13241cdf6f2277: Status 404 returned error can't find the container with id af60ab5d4a2b57ad1bbcc4a879fdc9dce5f1b3ef1e2f5eb96e13241cdf6f2277 Jan 26 17:02:22 crc kubenswrapper[4856]: I0126 17:02:22.948031 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"55a1283a-85e0-497f-8c5d-9a28168cb810","Type":"ContainerStarted","Data":"7d4c917e0830be1aeb1e35a23c1be0bcb5487e9bceffc19d68bcd09d48f247ad"} Jan 26 17:02:22 crc kubenswrapper[4856]: I0126 17:02:22.948400 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"55a1283a-85e0-497f-8c5d-9a28168cb810","Type":"ContainerStarted","Data":"c8b87a1aff72cc4fdfed70fd560545e58b33f1c048b47cb99c85517d64eba518"} Jan 26 17:02:22 crc kubenswrapper[4856]: I0126 17:02:22.949898 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"69379820-3062-4964-a8dd-8689f8cea38d","Type":"ContainerStarted","Data":"beeb8e8929ad597a53e5bcbe203dbd0aeea7fb6f4cfbcd350384cfbddded9459"} Jan 26 17:02:22 crc kubenswrapper[4856]: I0126 17:02:22.949956 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"69379820-3062-4964-a8dd-8689f8cea38d","Type":"ContainerStarted","Data":"af60ab5d4a2b57ad1bbcc4a879fdc9dce5f1b3ef1e2f5eb96e13241cdf6f2277"} Jan 26 17:02:22 crc kubenswrapper[4856]: E0126 17:02:22.953405 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-qgjjd" podUID="89cf05de-642b-4574-9f79-45e7a3d4afa3" Jan 26 17:02:22 crc kubenswrapper[4856]: E0126 17:02:22.955970 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-n8hp2" podUID="a6086d4b-faeb-4a12-8e6a-2a178dfe374c" Jan 26 17:02:22 crc kubenswrapper[4856]: E0126 17:02:22.958003 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-g8bgt" podUID="0d7eb7b8-63ae-493a-850b-0b9f3b42e927" Jan 26 17:02:22 crc 
kubenswrapper[4856]: E0126 17:02:22.969756 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-mqxwf" podUID="9c71e219-35d7-4e1e-a371-3456dfd29e83" Jan 26 17:02:23 crc kubenswrapper[4856]: I0126 17:02:23.958132 4856 generic.go:334] "Generic (PLEG): container finished" podID="55a1283a-85e0-497f-8c5d-9a28168cb810" containerID="7d4c917e0830be1aeb1e35a23c1be0bcb5487e9bceffc19d68bcd09d48f247ad" exitCode=0 Jan 26 17:02:23 crc kubenswrapper[4856]: I0126 17:02:23.958201 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"55a1283a-85e0-497f-8c5d-9a28168cb810","Type":"ContainerDied","Data":"7d4c917e0830be1aeb1e35a23c1be0bcb5487e9bceffc19d68bcd09d48f247ad"} Jan 26 17:02:23 crc kubenswrapper[4856]: I0126 17:02:23.991611 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=25.991585429 podStartE2EDuration="25.991585429s" podCreationTimestamp="2026-01-26 17:01:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:02:23.989468278 +0000 UTC m=+239.942722299" watchObservedRunningTime="2026-01-26 17:02:23.991585429 +0000 UTC m=+239.944839420" Jan 26 17:02:25 crc kubenswrapper[4856]: I0126 17:02:25.221395 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 17:02:25 crc kubenswrapper[4856]: I0126 17:02:25.405909 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/55a1283a-85e0-497f-8c5d-9a28168cb810-kube-api-access\") pod \"55a1283a-85e0-497f-8c5d-9a28168cb810\" (UID: \"55a1283a-85e0-497f-8c5d-9a28168cb810\") " Jan 26 17:02:25 crc kubenswrapper[4856]: I0126 17:02:25.406030 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/55a1283a-85e0-497f-8c5d-9a28168cb810-kubelet-dir\") pod \"55a1283a-85e0-497f-8c5d-9a28168cb810\" (UID: \"55a1283a-85e0-497f-8c5d-9a28168cb810\") " Jan 26 17:02:25 crc kubenswrapper[4856]: I0126 17:02:25.406073 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55a1283a-85e0-497f-8c5d-9a28168cb810-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "55a1283a-85e0-497f-8c5d-9a28168cb810" (UID: "55a1283a-85e0-497f-8c5d-9a28168cb810"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:02:25 crc kubenswrapper[4856]: I0126 17:02:25.406653 4856 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/55a1283a-85e0-497f-8c5d-9a28168cb810-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 17:02:25 crc kubenswrapper[4856]: I0126 17:02:25.411692 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55a1283a-85e0-497f-8c5d-9a28168cb810-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "55a1283a-85e0-497f-8c5d-9a28168cb810" (UID: "55a1283a-85e0-497f-8c5d-9a28168cb810"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:02:25 crc kubenswrapper[4856]: I0126 17:02:25.508413 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/55a1283a-85e0-497f-8c5d-9a28168cb810-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 17:02:25 crc kubenswrapper[4856]: I0126 17:02:25.975543 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"55a1283a-85e0-497f-8c5d-9a28168cb810","Type":"ContainerDied","Data":"c8b87a1aff72cc4fdfed70fd560545e58b33f1c048b47cb99c85517d64eba518"} Jan 26 17:02:25 crc kubenswrapper[4856]: I0126 17:02:25.975590 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8b87a1aff72cc4fdfed70fd560545e58b33f1c048b47cb99c85517d64eba518" Jan 26 17:02:25 crc kubenswrapper[4856]: I0126 17:02:25.975671 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 17:02:26 crc kubenswrapper[4856]: I0126 17:02:26.939277 4856 patch_prober.go:28] interesting pod/machine-config-daemon-xm9cq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:02:26 crc kubenswrapper[4856]: I0126 17:02:26.939332 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:02:26 crc kubenswrapper[4856]: I0126 17:02:26.939380 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" Jan 26 17:02:26 crc kubenswrapper[4856]: I0126 17:02:26.939910 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"54ca3fa13d9e8d442efa93b44a870369f7df3fe7562d77b98528f5c19a751f18"} pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 17:02:26 crc kubenswrapper[4856]: I0126 17:02:26.939954 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" containerName="machine-config-daemon" containerID="cri-o://54ca3fa13d9e8d442efa93b44a870369f7df3fe7562d77b98528f5c19a751f18" gracePeriod=600 Jan 26 17:02:27 crc kubenswrapper[4856]: I0126 17:02:27.987135 4856 generic.go:334] "Generic (PLEG): container finished" podID="63c75ede-5170-4db0-811b-5217ef8d72b3" containerID="54ca3fa13d9e8d442efa93b44a870369f7df3fe7562d77b98528f5c19a751f18" exitCode=0 Jan 26 17:02:27 crc kubenswrapper[4856]: I0126 17:02:27.987301 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" event={"ID":"63c75ede-5170-4db0-811b-5217ef8d72b3","Type":"ContainerDied","Data":"54ca3fa13d9e8d442efa93b44a870369f7df3fe7562d77b98528f5c19a751f18"} Jan 26 17:02:29 crc kubenswrapper[4856]: I0126 17:02:29.538678 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" event={"ID":"63c75ede-5170-4db0-811b-5217ef8d72b3","Type":"ContainerStarted","Data":"9758bfdfd1807e791935ac7ec93246863e5867351e35d27ffaff68ae79110e9c"} Jan 26 17:02:30 crc kubenswrapper[4856]: I0126 17:02:30.537074 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-txmdl" 
event={"ID":"40a27476-22b1-4083-990e-66e70ccdaf4c","Type":"ContainerStarted","Data":"4be8cc185ffcb38acac0516b9ba74f7fa439552b3ff463ccf813f91341bce48c"} Jan 26 17:02:31 crc kubenswrapper[4856]: I0126 17:02:31.543467 4856 generic.go:334] "Generic (PLEG): container finished" podID="40a27476-22b1-4083-990e-66e70ccdaf4c" containerID="4be8cc185ffcb38acac0516b9ba74f7fa439552b3ff463ccf813f91341bce48c" exitCode=0 Jan 26 17:02:31 crc kubenswrapper[4856]: I0126 17:02:31.543555 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-txmdl" event={"ID":"40a27476-22b1-4083-990e-66e70ccdaf4c","Type":"ContainerDied","Data":"4be8cc185ffcb38acac0516b9ba74f7fa439552b3ff463ccf813f91341bce48c"} Jan 26 17:02:33 crc kubenswrapper[4856]: I0126 17:02:33.556845 4856 generic.go:334] "Generic (PLEG): container finished" podID="d6944fc9-b8d7-4013-8702-b5765c410a0b" containerID="25c95d58185d3429ff473dbc3a21342905624b31dd17e96573eb140be4c2402c" exitCode=0 Jan 26 17:02:33 crc kubenswrapper[4856]: I0126 17:02:33.558214 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kwt4" event={"ID":"d6944fc9-b8d7-4013-8702-b5765c410a0b","Type":"ContainerDied","Data":"25c95d58185d3429ff473dbc3a21342905624b31dd17e96573eb140be4c2402c"} Jan 26 17:02:33 crc kubenswrapper[4856]: I0126 17:02:33.561404 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-txmdl" event={"ID":"40a27476-22b1-4083-990e-66e70ccdaf4c","Type":"ContainerStarted","Data":"d2eb8e794ba046c4faf481951340700b775e721a354e0ab5bea576c03e396810"} Jan 26 17:02:33 crc kubenswrapper[4856]: I0126 17:02:33.566884 4856 generic.go:334] "Generic (PLEG): container finished" podID="7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766" containerID="d684603f69b61a1ce87ec7d1d3ef00e518372571ee64ede6a51ce75afd2227ca" exitCode=0 Jan 26 17:02:33 crc kubenswrapper[4856]: I0126 17:02:33.566929 4856 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/certified-operators-62nhd" event={"ID":"7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766","Type":"ContainerDied","Data":"d684603f69b61a1ce87ec7d1d3ef00e518372571ee64ede6a51ce75afd2227ca"} Jan 26 17:02:33 crc kubenswrapper[4856]: I0126 17:02:33.782186 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-txmdl" podStartSLOduration=5.360780902 podStartE2EDuration="1m16.782163275s" podCreationTimestamp="2026-01-26 17:01:17 +0000 UTC" firstStartedPulling="2026-01-26 17:01:21.086300628 +0000 UTC m=+177.039554609" lastFinishedPulling="2026-01-26 17:02:32.507683001 +0000 UTC m=+248.460936982" observedRunningTime="2026-01-26 17:02:33.630036358 +0000 UTC m=+249.583290339" watchObservedRunningTime="2026-01-26 17:02:33.782163275 +0000 UTC m=+249.735417256" Jan 26 17:02:33 crc kubenswrapper[4856]: I0126 17:02:33.783682 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-dnvfq"] Jan 26 17:02:33 crc kubenswrapper[4856]: E0126 17:02:33.783984 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55a1283a-85e0-497f-8c5d-9a28168cb810" containerName="pruner" Jan 26 17:02:33 crc kubenswrapper[4856]: I0126 17:02:33.784005 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="55a1283a-85e0-497f-8c5d-9a28168cb810" containerName="pruner" Jan 26 17:02:33 crc kubenswrapper[4856]: I0126 17:02:33.784143 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="55a1283a-85e0-497f-8c5d-9a28168cb810" containerName="pruner" Jan 26 17:02:33 crc kubenswrapper[4856]: I0126 17:02:33.784716 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-dnvfq" Jan 26 17:02:33 crc kubenswrapper[4856]: I0126 17:02:33.805553 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-dnvfq"] Jan 26 17:02:33 crc kubenswrapper[4856]: I0126 17:02:33.886417 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/26fc27be-aaf6-4ce1-9b90-0bca184e8f12-trusted-ca\") pod \"image-registry-66df7c8f76-dnvfq\" (UID: \"26fc27be-aaf6-4ce1-9b90-0bca184e8f12\") " pod="openshift-image-registry/image-registry-66df7c8f76-dnvfq" Jan 26 17:02:33 crc kubenswrapper[4856]: I0126 17:02:33.886748 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/26fc27be-aaf6-4ce1-9b90-0bca184e8f12-registry-certificates\") pod \"image-registry-66df7c8f76-dnvfq\" (UID: \"26fc27be-aaf6-4ce1-9b90-0bca184e8f12\") " pod="openshift-image-registry/image-registry-66df7c8f76-dnvfq" Jan 26 17:02:33 crc kubenswrapper[4856]: I0126 17:02:33.886900 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-dnvfq\" (UID: \"26fc27be-aaf6-4ce1-9b90-0bca184e8f12\") " pod="openshift-image-registry/image-registry-66df7c8f76-dnvfq" Jan 26 17:02:33 crc kubenswrapper[4856]: I0126 17:02:33.887086 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/26fc27be-aaf6-4ce1-9b90-0bca184e8f12-ca-trust-extracted\") pod \"image-registry-66df7c8f76-dnvfq\" (UID: \"26fc27be-aaf6-4ce1-9b90-0bca184e8f12\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-dnvfq" Jan 26 17:02:33 crc kubenswrapper[4856]: I0126 17:02:33.887264 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/26fc27be-aaf6-4ce1-9b90-0bca184e8f12-registry-tls\") pod \"image-registry-66df7c8f76-dnvfq\" (UID: \"26fc27be-aaf6-4ce1-9b90-0bca184e8f12\") " pod="openshift-image-registry/image-registry-66df7c8f76-dnvfq" Jan 26 17:02:33 crc kubenswrapper[4856]: I0126 17:02:33.887427 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/26fc27be-aaf6-4ce1-9b90-0bca184e8f12-bound-sa-token\") pod \"image-registry-66df7c8f76-dnvfq\" (UID: \"26fc27be-aaf6-4ce1-9b90-0bca184e8f12\") " pod="openshift-image-registry/image-registry-66df7c8f76-dnvfq" Jan 26 17:02:33 crc kubenswrapper[4856]: I0126 17:02:33.887592 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/26fc27be-aaf6-4ce1-9b90-0bca184e8f12-installation-pull-secrets\") pod \"image-registry-66df7c8f76-dnvfq\" (UID: \"26fc27be-aaf6-4ce1-9b90-0bca184e8f12\") " pod="openshift-image-registry/image-registry-66df7c8f76-dnvfq" Jan 26 17:02:33 crc kubenswrapper[4856]: I0126 17:02:33.887744 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmbf7\" (UniqueName: \"kubernetes.io/projected/26fc27be-aaf6-4ce1-9b90-0bca184e8f12-kube-api-access-mmbf7\") pod \"image-registry-66df7c8f76-dnvfq\" (UID: \"26fc27be-aaf6-4ce1-9b90-0bca184e8f12\") " pod="openshift-image-registry/image-registry-66df7c8f76-dnvfq" Jan 26 17:02:33 crc kubenswrapper[4856]: I0126 17:02:33.920661 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-dnvfq\" (UID: \"26fc27be-aaf6-4ce1-9b90-0bca184e8f12\") " pod="openshift-image-registry/image-registry-66df7c8f76-dnvfq" Jan 26 17:02:33 crc kubenswrapper[4856]: I0126 17:02:33.989181 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmbf7\" (UniqueName: \"kubernetes.io/projected/26fc27be-aaf6-4ce1-9b90-0bca184e8f12-kube-api-access-mmbf7\") pod \"image-registry-66df7c8f76-dnvfq\" (UID: \"26fc27be-aaf6-4ce1-9b90-0bca184e8f12\") " pod="openshift-image-registry/image-registry-66df7c8f76-dnvfq" Jan 26 17:02:33 crc kubenswrapper[4856]: I0126 17:02:33.989239 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/26fc27be-aaf6-4ce1-9b90-0bca184e8f12-trusted-ca\") pod \"image-registry-66df7c8f76-dnvfq\" (UID: \"26fc27be-aaf6-4ce1-9b90-0bca184e8f12\") " pod="openshift-image-registry/image-registry-66df7c8f76-dnvfq" Jan 26 17:02:33 crc kubenswrapper[4856]: I0126 17:02:33.989281 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/26fc27be-aaf6-4ce1-9b90-0bca184e8f12-registry-certificates\") pod \"image-registry-66df7c8f76-dnvfq\" (UID: \"26fc27be-aaf6-4ce1-9b90-0bca184e8f12\") " pod="openshift-image-registry/image-registry-66df7c8f76-dnvfq" Jan 26 17:02:33 crc kubenswrapper[4856]: I0126 17:02:33.989341 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/26fc27be-aaf6-4ce1-9b90-0bca184e8f12-ca-trust-extracted\") pod \"image-registry-66df7c8f76-dnvfq\" (UID: \"26fc27be-aaf6-4ce1-9b90-0bca184e8f12\") " pod="openshift-image-registry/image-registry-66df7c8f76-dnvfq" Jan 26 17:02:33 crc kubenswrapper[4856]: I0126 17:02:33.989375 4856 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/26fc27be-aaf6-4ce1-9b90-0bca184e8f12-registry-tls\") pod \"image-registry-66df7c8f76-dnvfq\" (UID: \"26fc27be-aaf6-4ce1-9b90-0bca184e8f12\") " pod="openshift-image-registry/image-registry-66df7c8f76-dnvfq" Jan 26 17:02:33 crc kubenswrapper[4856]: I0126 17:02:33.989411 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/26fc27be-aaf6-4ce1-9b90-0bca184e8f12-bound-sa-token\") pod \"image-registry-66df7c8f76-dnvfq\" (UID: \"26fc27be-aaf6-4ce1-9b90-0bca184e8f12\") " pod="openshift-image-registry/image-registry-66df7c8f76-dnvfq" Jan 26 17:02:33 crc kubenswrapper[4856]: I0126 17:02:33.989437 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/26fc27be-aaf6-4ce1-9b90-0bca184e8f12-installation-pull-secrets\") pod \"image-registry-66df7c8f76-dnvfq\" (UID: \"26fc27be-aaf6-4ce1-9b90-0bca184e8f12\") " pod="openshift-image-registry/image-registry-66df7c8f76-dnvfq" Jan 26 17:02:33 crc kubenswrapper[4856]: I0126 17:02:33.990935 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/26fc27be-aaf6-4ce1-9b90-0bca184e8f12-ca-trust-extracted\") pod \"image-registry-66df7c8f76-dnvfq\" (UID: \"26fc27be-aaf6-4ce1-9b90-0bca184e8f12\") " pod="openshift-image-registry/image-registry-66df7c8f76-dnvfq" Jan 26 17:02:33 crc kubenswrapper[4856]: I0126 17:02:33.991414 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/26fc27be-aaf6-4ce1-9b90-0bca184e8f12-trusted-ca\") pod \"image-registry-66df7c8f76-dnvfq\" (UID: \"26fc27be-aaf6-4ce1-9b90-0bca184e8f12\") " pod="openshift-image-registry/image-registry-66df7c8f76-dnvfq" Jan 26 17:02:33 
crc kubenswrapper[4856]: I0126 17:02:33.994032 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/26fc27be-aaf6-4ce1-9b90-0bca184e8f12-registry-certificates\") pod \"image-registry-66df7c8f76-dnvfq\" (UID: \"26fc27be-aaf6-4ce1-9b90-0bca184e8f12\") " pod="openshift-image-registry/image-registry-66df7c8f76-dnvfq" Jan 26 17:02:33 crc kubenswrapper[4856]: I0126 17:02:33.995441 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/26fc27be-aaf6-4ce1-9b90-0bca184e8f12-installation-pull-secrets\") pod \"image-registry-66df7c8f76-dnvfq\" (UID: \"26fc27be-aaf6-4ce1-9b90-0bca184e8f12\") " pod="openshift-image-registry/image-registry-66df7c8f76-dnvfq" Jan 26 17:02:33 crc kubenswrapper[4856]: I0126 17:02:33.995467 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/26fc27be-aaf6-4ce1-9b90-0bca184e8f12-registry-tls\") pod \"image-registry-66df7c8f76-dnvfq\" (UID: \"26fc27be-aaf6-4ce1-9b90-0bca184e8f12\") " pod="openshift-image-registry/image-registry-66df7c8f76-dnvfq" Jan 26 17:02:34 crc kubenswrapper[4856]: I0126 17:02:34.026840 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmbf7\" (UniqueName: \"kubernetes.io/projected/26fc27be-aaf6-4ce1-9b90-0bca184e8f12-kube-api-access-mmbf7\") pod \"image-registry-66df7c8f76-dnvfq\" (UID: \"26fc27be-aaf6-4ce1-9b90-0bca184e8f12\") " pod="openshift-image-registry/image-registry-66df7c8f76-dnvfq" Jan 26 17:02:34 crc kubenswrapper[4856]: I0126 17:02:34.028620 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/26fc27be-aaf6-4ce1-9b90-0bca184e8f12-bound-sa-token\") pod \"image-registry-66df7c8f76-dnvfq\" (UID: \"26fc27be-aaf6-4ce1-9b90-0bca184e8f12\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-dnvfq" Jan 26 17:02:34 crc kubenswrapper[4856]: I0126 17:02:34.100704 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-dnvfq" Jan 26 17:02:34 crc kubenswrapper[4856]: I0126 17:02:34.521698 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-dnvfq"] Jan 26 17:02:34 crc kubenswrapper[4856]: W0126 17:02:34.524925 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26fc27be_aaf6_4ce1_9b90_0bca184e8f12.slice/crio-04a92d7e0364d4e4c12781d3d1a2e91386de31dbf935f68165ee75d7491184f3 WatchSource:0}: Error finding container 04a92d7e0364d4e4c12781d3d1a2e91386de31dbf935f68165ee75d7491184f3: Status 404 returned error can't find the container with id 04a92d7e0364d4e4c12781d3d1a2e91386de31dbf935f68165ee75d7491184f3 Jan 26 17:02:34 crc kubenswrapper[4856]: I0126 17:02:34.574335 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62nhd" event={"ID":"7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766","Type":"ContainerStarted","Data":"67c41d7af13d33af9423d069e86b531ff9d226b1435b62347517f490f3904943"} Jan 26 17:02:34 crc kubenswrapper[4856]: I0126 17:02:34.577512 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kwt4" event={"ID":"d6944fc9-b8d7-4013-8702-b5765c410a0b","Type":"ContainerStarted","Data":"a6302eb5e39718f049dce88c0f2a26632538c4eb99b7dbea0dca8f7aae8306c7"} Jan 26 17:02:34 crc kubenswrapper[4856]: I0126 17:02:34.579759 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-dnvfq" event={"ID":"26fc27be-aaf6-4ce1-9b90-0bca184e8f12","Type":"ContainerStarted","Data":"04a92d7e0364d4e4c12781d3d1a2e91386de31dbf935f68165ee75d7491184f3"} Jan 26 17:02:34 crc 
kubenswrapper[4856]: I0126 17:02:34.601297 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-62nhd" podStartSLOduration=4.791239977 podStartE2EDuration="1m17.601280986s" podCreationTimestamp="2026-01-26 17:01:17 +0000 UTC" firstStartedPulling="2026-01-26 17:01:21.14980898 +0000 UTC m=+177.103062961" lastFinishedPulling="2026-01-26 17:02:33.959849989 +0000 UTC m=+249.913103970" observedRunningTime="2026-01-26 17:02:34.596784196 +0000 UTC m=+250.550038177" watchObservedRunningTime="2026-01-26 17:02:34.601280986 +0000 UTC m=+250.554534967" Jan 26 17:02:34 crc kubenswrapper[4856]: I0126 17:02:34.619736 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4kwt4" podStartSLOduration=2.718136475 podStartE2EDuration="1m15.619695987s" podCreationTimestamp="2026-01-26 17:01:19 +0000 UTC" firstStartedPulling="2026-01-26 17:01:21.081665711 +0000 UTC m=+177.034919692" lastFinishedPulling="2026-01-26 17:02:33.983225223 +0000 UTC m=+249.936479204" observedRunningTime="2026-01-26 17:02:34.617097442 +0000 UTC m=+250.570351443" watchObservedRunningTime="2026-01-26 17:02:34.619695987 +0000 UTC m=+250.572949968" Jan 26 17:02:35 crc kubenswrapper[4856]: I0126 17:02:35.585998 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-dnvfq" event={"ID":"26fc27be-aaf6-4ce1-9b90-0bca184e8f12","Type":"ContainerStarted","Data":"6ad5234eb21fa7846692d1d599422d832493eb58c2a0ff344f7f0e1e1fea0b14"} Jan 26 17:02:35 crc kubenswrapper[4856]: I0126 17:02:35.586321 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-dnvfq" Jan 26 17:02:35 crc kubenswrapper[4856]: I0126 17:02:35.617971 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-dnvfq" 
podStartSLOduration=2.617946525 podStartE2EDuration="2.617946525s" podCreationTimestamp="2026-01-26 17:02:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:02:35.610617953 +0000 UTC m=+251.563871944" watchObservedRunningTime="2026-01-26 17:02:35.617946525 +0000 UTC m=+251.571200506" Jan 26 17:02:37 crc kubenswrapper[4856]: I0126 17:02:37.598005 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g8bgt" event={"ID":"0d7eb7b8-63ae-493a-850b-0b9f3b42e927","Type":"ContainerStarted","Data":"3926e8ab5a46c920f3ea8cad2d006d4f4059fc6b8c475a7f6f3a22211a28d019"} Jan 26 17:02:37 crc kubenswrapper[4856]: I0126 17:02:37.816758 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-txmdl" Jan 26 17:02:37 crc kubenswrapper[4856]: I0126 17:02:37.816845 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-txmdl" Jan 26 17:02:38 crc kubenswrapper[4856]: I0126 17:02:38.091202 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-txmdl" Jan 26 17:02:38 crc kubenswrapper[4856]: I0126 17:02:38.193046 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-62nhd" Jan 26 17:02:38 crc kubenswrapper[4856]: I0126 17:02:38.193315 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-62nhd" Jan 26 17:02:38 crc kubenswrapper[4856]: I0126 17:02:38.233212 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-62nhd" Jan 26 17:02:38 crc kubenswrapper[4856]: I0126 17:02:38.605050 4856 generic.go:334] "Generic (PLEG): container finished" 
podID="0d7eb7b8-63ae-493a-850b-0b9f3b42e927" containerID="3926e8ab5a46c920f3ea8cad2d006d4f4059fc6b8c475a7f6f3a22211a28d019" exitCode=0 Jan 26 17:02:38 crc kubenswrapper[4856]: I0126 17:02:38.605160 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g8bgt" event={"ID":"0d7eb7b8-63ae-493a-850b-0b9f3b42e927","Type":"ContainerDied","Data":"3926e8ab5a46c920f3ea8cad2d006d4f4059fc6b8c475a7f6f3a22211a28d019"} Jan 26 17:02:38 crc kubenswrapper[4856]: I0126 17:02:38.696303 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-txmdl" Jan 26 17:02:39 crc kubenswrapper[4856]: I0126 17:02:39.652696 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-62nhd" Jan 26 17:02:39 crc kubenswrapper[4856]: I0126 17:02:39.894102 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4kwt4" Jan 26 17:02:39 crc kubenswrapper[4856]: I0126 17:02:39.894397 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4kwt4" Jan 26 17:02:39 crc kubenswrapper[4856]: I0126 17:02:39.928165 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4kwt4" Jan 26 17:02:40 crc kubenswrapper[4856]: I0126 17:02:40.655652 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4kwt4" Jan 26 17:02:40 crc kubenswrapper[4856]: I0126 17:02:40.967644 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-62nhd"] Jan 26 17:02:41 crc kubenswrapper[4856]: I0126 17:02:41.621481 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-62nhd" 
podUID="7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766" containerName="registry-server" containerID="cri-o://67c41d7af13d33af9423d069e86b531ff9d226b1435b62347517f490f3904943" gracePeriod=2 Jan 26 17:02:44 crc kubenswrapper[4856]: I0126 17:02:44.073555 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-txmdl"] Jan 26 17:02:44 crc kubenswrapper[4856]: I0126 17:02:44.074320 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-txmdl" podUID="40a27476-22b1-4083-990e-66e70ccdaf4c" containerName="registry-server" containerID="cri-o://d2eb8e794ba046c4faf481951340700b775e721a354e0ab5bea576c03e396810" gracePeriod=30 Jan 26 17:02:44 crc kubenswrapper[4856]: I0126 17:02:44.090955 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n8hp2"] Jan 26 17:02:44 crc kubenswrapper[4856]: I0126 17:02:44.097467 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qgjjd"] Jan 26 17:02:44 crc kubenswrapper[4856]: I0126 17:02:44.109910 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wvttb"] Jan 26 17:02:44 crc kubenswrapper[4856]: I0126 17:02:44.110368 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-wvttb" podUID="2d37efbf-d18f-486b-9b43-bc4d181af4ca" containerName="marketplace-operator" containerID="cri-o://743ebe09ef635c21a62370a80c15b76e3ff5e7e1801bb955f28ed30f848dcca9" gracePeriod=30 Jan 26 17:02:44 crc kubenswrapper[4856]: I0126 17:02:44.123218 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4kwt4"] Jan 26 17:02:44 crc kubenswrapper[4856]: I0126 17:02:44.123591 4856 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-4kwt4" podUID="d6944fc9-b8d7-4013-8702-b5765c410a0b" containerName="registry-server" containerID="cri-o://a6302eb5e39718f049dce88c0f2a26632538c4eb99b7dbea0dca8f7aae8306c7" gracePeriod=30 Jan 26 17:02:44 crc kubenswrapper[4856]: I0126 17:02:44.152865 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tdtfh"] Jan 26 17:02:44 crc kubenswrapper[4856]: I0126 17:02:44.175841 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g8bgt"] Jan 26 17:02:44 crc kubenswrapper[4856]: I0126 17:02:44.176020 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tdtfh" Jan 26 17:02:44 crc kubenswrapper[4856]: I0126 17:02:44.177486 4856 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-wvttb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Jan 26 17:02:44 crc kubenswrapper[4856]: I0126 17:02:44.177557 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-wvttb" podUID="2d37efbf-d18f-486b-9b43-bc4d181af4ca" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Jan 26 17:02:44 crc kubenswrapper[4856]: I0126 17:02:44.181060 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mqxwf"] Jan 26 17:02:44 crc kubenswrapper[4856]: I0126 17:02:44.194666 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tdtfh"] Jan 26 17:02:44 crc kubenswrapper[4856]: I0126 17:02:44.196498 4856 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/redhat-operators-qknj9"] Jan 26 17:02:44 crc kubenswrapper[4856]: I0126 17:02:44.275223 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/566ca894-037a-4b73-95d4-a6246c7c851a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tdtfh\" (UID: \"566ca894-037a-4b73-95d4-a6246c7c851a\") " pod="openshift-marketplace/marketplace-operator-79b997595-tdtfh" Jan 26 17:02:44 crc kubenswrapper[4856]: I0126 17:02:44.275284 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmkjj\" (UniqueName: \"kubernetes.io/projected/566ca894-037a-4b73-95d4-a6246c7c851a-kube-api-access-wmkjj\") pod \"marketplace-operator-79b997595-tdtfh\" (UID: \"566ca894-037a-4b73-95d4-a6246c7c851a\") " pod="openshift-marketplace/marketplace-operator-79b997595-tdtfh" Jan 26 17:02:44 crc kubenswrapper[4856]: I0126 17:02:44.275353 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/566ca894-037a-4b73-95d4-a6246c7c851a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tdtfh\" (UID: \"566ca894-037a-4b73-95d4-a6246c7c851a\") " pod="openshift-marketplace/marketplace-operator-79b997595-tdtfh" Jan 26 17:02:44 crc kubenswrapper[4856]: I0126 17:02:44.376589 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/566ca894-037a-4b73-95d4-a6246c7c851a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tdtfh\" (UID: \"566ca894-037a-4b73-95d4-a6246c7c851a\") " pod="openshift-marketplace/marketplace-operator-79b997595-tdtfh" Jan 26 17:02:44 crc kubenswrapper[4856]: I0126 17:02:44.376971 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-wmkjj\" (UniqueName: \"kubernetes.io/projected/566ca894-037a-4b73-95d4-a6246c7c851a-kube-api-access-wmkjj\") pod \"marketplace-operator-79b997595-tdtfh\" (UID: \"566ca894-037a-4b73-95d4-a6246c7c851a\") " pod="openshift-marketplace/marketplace-operator-79b997595-tdtfh" Jan 26 17:02:44 crc kubenswrapper[4856]: I0126 17:02:44.377099 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/566ca894-037a-4b73-95d4-a6246c7c851a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tdtfh\" (UID: \"566ca894-037a-4b73-95d4-a6246c7c851a\") " pod="openshift-marketplace/marketplace-operator-79b997595-tdtfh" Jan 26 17:02:44 crc kubenswrapper[4856]: I0126 17:02:44.377839 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/566ca894-037a-4b73-95d4-a6246c7c851a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tdtfh\" (UID: \"566ca894-037a-4b73-95d4-a6246c7c851a\") " pod="openshift-marketplace/marketplace-operator-79b997595-tdtfh" Jan 26 17:02:44 crc kubenswrapper[4856]: I0126 17:02:44.382268 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/566ca894-037a-4b73-95d4-a6246c7c851a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tdtfh\" (UID: \"566ca894-037a-4b73-95d4-a6246c7c851a\") " pod="openshift-marketplace/marketplace-operator-79b997595-tdtfh" Jan 26 17:02:44 crc kubenswrapper[4856]: I0126 17:02:44.397876 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmkjj\" (UniqueName: \"kubernetes.io/projected/566ca894-037a-4b73-95d4-a6246c7c851a-kube-api-access-wmkjj\") pod \"marketplace-operator-79b997595-tdtfh\" (UID: \"566ca894-037a-4b73-95d4-a6246c7c851a\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-tdtfh" Jan 26 17:02:44 crc kubenswrapper[4856]: I0126 17:02:44.509393 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tdtfh" Jan 26 17:02:44 crc kubenswrapper[4856]: I0126 17:02:44.626683 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-cb8nk"] Jan 26 17:02:46 crc kubenswrapper[4856]: I0126 17:02:46.648167 4856 generic.go:334] "Generic (PLEG): container finished" podID="2d37efbf-d18f-486b-9b43-bc4d181af4ca" containerID="743ebe09ef635c21a62370a80c15b76e3ff5e7e1801bb955f28ed30f848dcca9" exitCode=0 Jan 26 17:02:46 crc kubenswrapper[4856]: I0126 17:02:46.648260 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wvttb" event={"ID":"2d37efbf-d18f-486b-9b43-bc4d181af4ca","Type":"ContainerDied","Data":"743ebe09ef635c21a62370a80c15b76e3ff5e7e1801bb955f28ed30f848dcca9"} Jan 26 17:02:46 crc kubenswrapper[4856]: I0126 17:02:46.650781 4856 generic.go:334] "Generic (PLEG): container finished" podID="40a27476-22b1-4083-990e-66e70ccdaf4c" containerID="d2eb8e794ba046c4faf481951340700b775e721a354e0ab5bea576c03e396810" exitCode=0 Jan 26 17:02:46 crc kubenswrapper[4856]: I0126 17:02:46.650857 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-txmdl" event={"ID":"40a27476-22b1-4083-990e-66e70ccdaf4c","Type":"ContainerDied","Data":"d2eb8e794ba046c4faf481951340700b775e721a354e0ab5bea576c03e396810"} Jan 26 17:02:46 crc kubenswrapper[4856]: I0126 17:02:46.652376 4856 generic.go:334] "Generic (PLEG): container finished" podID="7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766" containerID="67c41d7af13d33af9423d069e86b531ff9d226b1435b62347517f490f3904943" exitCode=0 Jan 26 17:02:46 crc kubenswrapper[4856]: I0126 17:02:46.652446 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-62nhd" event={"ID":"7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766","Type":"ContainerDied","Data":"67c41d7af13d33af9423d069e86b531ff9d226b1435b62347517f490f3904943"} Jan 26 17:02:47 crc kubenswrapper[4856]: I0126 17:02:47.659300 4856 generic.go:334] "Generic (PLEG): container finished" podID="d6944fc9-b8d7-4013-8702-b5765c410a0b" containerID="a6302eb5e39718f049dce88c0f2a26632538c4eb99b7dbea0dca8f7aae8306c7" exitCode=0 Jan 26 17:02:47 crc kubenswrapper[4856]: I0126 17:02:47.659382 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kwt4" event={"ID":"d6944fc9-b8d7-4013-8702-b5765c410a0b","Type":"ContainerDied","Data":"a6302eb5e39718f049dce88c0f2a26632538c4eb99b7dbea0dca8f7aae8306c7"} Jan 26 17:02:47 crc kubenswrapper[4856]: E0126 17:02:47.817232 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d2eb8e794ba046c4faf481951340700b775e721a354e0ab5bea576c03e396810 is running failed: container process not found" containerID="d2eb8e794ba046c4faf481951340700b775e721a354e0ab5bea576c03e396810" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 17:02:47 crc kubenswrapper[4856]: E0126 17:02:47.817691 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d2eb8e794ba046c4faf481951340700b775e721a354e0ab5bea576c03e396810 is running failed: container process not found" containerID="d2eb8e794ba046c4faf481951340700b775e721a354e0ab5bea576c03e396810" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 17:02:47 crc kubenswrapper[4856]: E0126 17:02:47.818130 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d2eb8e794ba046c4faf481951340700b775e721a354e0ab5bea576c03e396810 is running failed: container process 
not found" containerID="d2eb8e794ba046c4faf481951340700b775e721a354e0ab5bea576c03e396810" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 17:02:47 crc kubenswrapper[4856]: E0126 17:02:47.818167 4856 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d2eb8e794ba046c4faf481951340700b775e721a354e0ab5bea576c03e396810 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-txmdl" podUID="40a27476-22b1-4083-990e-66e70ccdaf4c" containerName="registry-server" Jan 26 17:02:48 crc kubenswrapper[4856]: E0126 17:02:48.193442 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 67c41d7af13d33af9423d069e86b531ff9d226b1435b62347517f490f3904943 is running failed: container process not found" containerID="67c41d7af13d33af9423d069e86b531ff9d226b1435b62347517f490f3904943" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 17:02:48 crc kubenswrapper[4856]: E0126 17:02:48.193816 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 67c41d7af13d33af9423d069e86b531ff9d226b1435b62347517f490f3904943 is running failed: container process not found" containerID="67c41d7af13d33af9423d069e86b531ff9d226b1435b62347517f490f3904943" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 17:02:48 crc kubenswrapper[4856]: E0126 17:02:48.194319 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 67c41d7af13d33af9423d069e86b531ff9d226b1435b62347517f490f3904943 is running failed: container process not found" containerID="67c41d7af13d33af9423d069e86b531ff9d226b1435b62347517f490f3904943" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 17:02:48 crc kubenswrapper[4856]: E0126 
17:02:48.194357 4856 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 67c41d7af13d33af9423d069e86b531ff9d226b1435b62347517f490f3904943 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-62nhd" podUID="7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766" containerName="registry-server" Jan 26 17:02:49 crc kubenswrapper[4856]: E0126 17:02:49.895157 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a6302eb5e39718f049dce88c0f2a26632538c4eb99b7dbea0dca8f7aae8306c7 is running failed: container process not found" containerID="a6302eb5e39718f049dce88c0f2a26632538c4eb99b7dbea0dca8f7aae8306c7" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 17:02:49 crc kubenswrapper[4856]: E0126 17:02:49.896078 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a6302eb5e39718f049dce88c0f2a26632538c4eb99b7dbea0dca8f7aae8306c7 is running failed: container process not found" containerID="a6302eb5e39718f049dce88c0f2a26632538c4eb99b7dbea0dca8f7aae8306c7" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 17:02:49 crc kubenswrapper[4856]: E0126 17:02:49.896449 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a6302eb5e39718f049dce88c0f2a26632538c4eb99b7dbea0dca8f7aae8306c7 is running failed: container process not found" containerID="a6302eb5e39718f049dce88c0f2a26632538c4eb99b7dbea0dca8f7aae8306c7" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 17:02:49 crc kubenswrapper[4856]: E0126 17:02:49.896487 4856 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
a6302eb5e39718f049dce88c0f2a26632538c4eb99b7dbea0dca8f7aae8306c7 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-4kwt4" podUID="d6944fc9-b8d7-4013-8702-b5765c410a0b" containerName="registry-server" Jan 26 17:02:51 crc kubenswrapper[4856]: I0126 17:02:51.684567 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62nhd" event={"ID":"7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766","Type":"ContainerDied","Data":"54783f51c7d33737624b9dffb5983a3ed107d31d30f3fd03ab73e5627dfd4bfd"} Jan 26 17:02:51 crc kubenswrapper[4856]: I0126 17:02:51.685007 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54783f51c7d33737624b9dffb5983a3ed107d31d30f3fd03ab73e5627dfd4bfd" Jan 26 17:02:51 crc kubenswrapper[4856]: I0126 17:02:51.685943 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-62nhd" Jan 26 17:02:51 crc kubenswrapper[4856]: I0126 17:02:51.790274 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s92lp\" (UniqueName: \"kubernetes.io/projected/7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766-kube-api-access-s92lp\") pod \"7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766\" (UID: \"7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766\") " Jan 26 17:02:51 crc kubenswrapper[4856]: I0126 17:02:51.790685 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766-utilities\") pod \"7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766\" (UID: \"7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766\") " Jan 26 17:02:51 crc kubenswrapper[4856]: I0126 17:02:51.790811 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766-catalog-content\") pod 
\"7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766\" (UID: \"7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766\") " Jan 26 17:02:51 crc kubenswrapper[4856]: I0126 17:02:51.792089 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766-utilities" (OuterVolumeSpecName: "utilities") pod "7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766" (UID: "7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:02:51 crc kubenswrapper[4856]: I0126 17:02:51.799858 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766-kube-api-access-s92lp" (OuterVolumeSpecName: "kube-api-access-s92lp") pod "7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766" (UID: "7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766"). InnerVolumeSpecName "kube-api-access-s92lp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:02:51 crc kubenswrapper[4856]: I0126 17:02:51.808284 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-txmdl" Jan 26 17:02:51 crc kubenswrapper[4856]: I0126 17:02:51.891942 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s92lp\" (UniqueName: \"kubernetes.io/projected/7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766-kube-api-access-s92lp\") on node \"crc\" DevicePath \"\"" Jan 26 17:02:51 crc kubenswrapper[4856]: I0126 17:02:51.891978 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:02:51 crc kubenswrapper[4856]: I0126 17:02:51.897617 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wvttb" Jan 26 17:02:51 crc kubenswrapper[4856]: I0126 17:02:51.909963 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766" (UID: "7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:02:51 crc kubenswrapper[4856]: I0126 17:02:51.912647 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4kwt4" Jan 26 17:02:51 crc kubenswrapper[4856]: I0126 17:02:51.993011 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40a27476-22b1-4083-990e-66e70ccdaf4c-utilities\") pod \"40a27476-22b1-4083-990e-66e70ccdaf4c\" (UID: \"40a27476-22b1-4083-990e-66e70ccdaf4c\") " Jan 26 17:02:51 crc kubenswrapper[4856]: I0126 17:02:51.993131 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tf49w\" (UniqueName: \"kubernetes.io/projected/40a27476-22b1-4083-990e-66e70ccdaf4c-kube-api-access-tf49w\") pod \"40a27476-22b1-4083-990e-66e70ccdaf4c\" (UID: \"40a27476-22b1-4083-990e-66e70ccdaf4c\") " Jan 26 17:02:51 crc kubenswrapper[4856]: I0126 17:02:51.993171 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40a27476-22b1-4083-990e-66e70ccdaf4c-catalog-content\") pod \"40a27476-22b1-4083-990e-66e70ccdaf4c\" (UID: \"40a27476-22b1-4083-990e-66e70ccdaf4c\") " Jan 26 17:02:51 crc kubenswrapper[4856]: I0126 17:02:51.993423 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:02:51 crc kubenswrapper[4856]: I0126 17:02:51.996221 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40a27476-22b1-4083-990e-66e70ccdaf4c-utilities" (OuterVolumeSpecName: "utilities") pod "40a27476-22b1-4083-990e-66e70ccdaf4c" (UID: "40a27476-22b1-4083-990e-66e70ccdaf4c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.006789 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40a27476-22b1-4083-990e-66e70ccdaf4c-kube-api-access-tf49w" (OuterVolumeSpecName: "kube-api-access-tf49w") pod "40a27476-22b1-4083-990e-66e70ccdaf4c" (UID: "40a27476-22b1-4083-990e-66e70ccdaf4c"). InnerVolumeSpecName "kube-api-access-tf49w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.096395 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8qsf\" (UniqueName: \"kubernetes.io/projected/2d37efbf-d18f-486b-9b43-bc4d181af4ca-kube-api-access-b8qsf\") pod \"2d37efbf-d18f-486b-9b43-bc4d181af4ca\" (UID: \"2d37efbf-d18f-486b-9b43-bc4d181af4ca\") " Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.098083 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6944fc9-b8d7-4013-8702-b5765c410a0b-catalog-content\") pod \"d6944fc9-b8d7-4013-8702-b5765c410a0b\" (UID: \"d6944fc9-b8d7-4013-8702-b5765c410a0b\") " Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.098277 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6944fc9-b8d7-4013-8702-b5765c410a0b-utilities\") pod 
\"d6944fc9-b8d7-4013-8702-b5765c410a0b\" (UID: \"d6944fc9-b8d7-4013-8702-b5765c410a0b\") " Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.098470 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2d37efbf-d18f-486b-9b43-bc4d181af4ca-marketplace-operator-metrics\") pod \"2d37efbf-d18f-486b-9b43-bc4d181af4ca\" (UID: \"2d37efbf-d18f-486b-9b43-bc4d181af4ca\") " Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.098741 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qzgz\" (UniqueName: \"kubernetes.io/projected/d6944fc9-b8d7-4013-8702-b5765c410a0b-kube-api-access-2qzgz\") pod \"d6944fc9-b8d7-4013-8702-b5765c410a0b\" (UID: \"d6944fc9-b8d7-4013-8702-b5765c410a0b\") " Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.098874 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2d37efbf-d18f-486b-9b43-bc4d181af4ca-marketplace-trusted-ca\") pod \"2d37efbf-d18f-486b-9b43-bc4d181af4ca\" (UID: \"2d37efbf-d18f-486b-9b43-bc4d181af4ca\") " Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.304420 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40a27476-22b1-4083-990e-66e70ccdaf4c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "40a27476-22b1-4083-990e-66e70ccdaf4c" (UID: "40a27476-22b1-4083-990e-66e70ccdaf4c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.304696 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40a27476-22b1-4083-990e-66e70ccdaf4c-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.304732 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tf49w\" (UniqueName: \"kubernetes.io/projected/40a27476-22b1-4083-990e-66e70ccdaf4c-kube-api-access-tf49w\") on node \"crc\" DevicePath \"\"" Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.306231 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6944fc9-b8d7-4013-8702-b5765c410a0b-utilities" (OuterVolumeSpecName: "utilities") pod "d6944fc9-b8d7-4013-8702-b5765c410a0b" (UID: "d6944fc9-b8d7-4013-8702-b5765c410a0b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.310367 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d37efbf-d18f-486b-9b43-bc4d181af4ca-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "2d37efbf-d18f-486b-9b43-bc4d181af4ca" (UID: "2d37efbf-d18f-486b-9b43-bc4d181af4ca"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.326710 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d37efbf-d18f-486b-9b43-bc4d181af4ca-kube-api-access-b8qsf" (OuterVolumeSpecName: "kube-api-access-b8qsf") pod "2d37efbf-d18f-486b-9b43-bc4d181af4ca" (UID: "2d37efbf-d18f-486b-9b43-bc4d181af4ca"). InnerVolumeSpecName "kube-api-access-b8qsf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.328773 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tdtfh"] Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.336453 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d37efbf-d18f-486b-9b43-bc4d181af4ca-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "2d37efbf-d18f-486b-9b43-bc4d181af4ca" (UID: "2d37efbf-d18f-486b-9b43-bc4d181af4ca"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.340280 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6944fc9-b8d7-4013-8702-b5765c410a0b-kube-api-access-2qzgz" (OuterVolumeSpecName: "kube-api-access-2qzgz") pod "d6944fc9-b8d7-4013-8702-b5765c410a0b" (UID: "d6944fc9-b8d7-4013-8702-b5765c410a0b"). InnerVolumeSpecName "kube-api-access-2qzgz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.354558 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6944fc9-b8d7-4013-8702-b5765c410a0b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d6944fc9-b8d7-4013-8702-b5765c410a0b" (UID: "d6944fc9-b8d7-4013-8702-b5765c410a0b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:02:52 crc kubenswrapper[4856]: W0126 17:02:52.372984 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod566ca894_037a_4b73_95d4_a6246c7c851a.slice/crio-9cd1ed3014b88b989fd630041a498b328f9b0617c0cba5a762ce4e78425e361e WatchSource:0}: Error finding container 9cd1ed3014b88b989fd630041a498b328f9b0617c0cba5a762ce4e78425e361e: Status 404 returned error can't find the container with id 9cd1ed3014b88b989fd630041a498b328f9b0617c0cba5a762ce4e78425e361e Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.405769 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8qsf\" (UniqueName: \"kubernetes.io/projected/2d37efbf-d18f-486b-9b43-bc4d181af4ca-kube-api-access-b8qsf\") on node \"crc\" DevicePath \"\"" Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.405881 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6944fc9-b8d7-4013-8702-b5765c410a0b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.405928 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6944fc9-b8d7-4013-8702-b5765c410a0b-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.405960 4856 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2d37efbf-d18f-486b-9b43-bc4d181af4ca-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.406142 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qzgz\" (UniqueName: \"kubernetes.io/projected/d6944fc9-b8d7-4013-8702-b5765c410a0b-kube-api-access-2qzgz\") on node \"crc\" DevicePath \"\"" Jan 26 17:02:52 crc 
kubenswrapper[4856]: I0126 17:02:52.406223 4856 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2d37efbf-d18f-486b-9b43-bc4d181af4ca-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.406265 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40a27476-22b1-4083-990e-66e70ccdaf4c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.692477 4856 generic.go:334] "Generic (PLEG): container finished" podID="89cf05de-642b-4574-9f79-45e7a3d4afa3" containerID="ce1db845ee974faa07417a8ed669be7680c5b2f3c82683fabe5144e8c8d7d22c" exitCode=0 Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.692574 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgjjd" event={"ID":"89cf05de-642b-4574-9f79-45e7a3d4afa3","Type":"ContainerDied","Data":"ce1db845ee974faa07417a8ed669be7680c5b2f3c82683fabe5144e8c8d7d22c"} Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.698453 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wvttb" event={"ID":"2d37efbf-d18f-486b-9b43-bc4d181af4ca","Type":"ContainerDied","Data":"fff8ee4c0db342e8c666d6319a47d7101521fb44435e8030d5a5dc565b0b6c44"} Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.698734 4856 scope.go:117] "RemoveContainer" containerID="743ebe09ef635c21a62370a80c15b76e3ff5e7e1801bb955f28ed30f848dcca9" Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.698929 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wvttb" Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.712379 4856 generic.go:334] "Generic (PLEG): container finished" podID="a6086d4b-faeb-4a12-8e6a-2a178dfe374c" containerID="28595b618ba5f672f05780cdefc1a37904c64c77c313fd5d1eade9c6ec61abec" exitCode=0 Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.712429 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n8hp2" event={"ID":"a6086d4b-faeb-4a12-8e6a-2a178dfe374c","Type":"ContainerDied","Data":"28595b618ba5f672f05780cdefc1a37904c64c77c313fd5d1eade9c6ec61abec"} Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.717253 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mqxwf" event={"ID":"9c71e219-35d7-4e1e-a371-3456dfd29e83","Type":"ContainerStarted","Data":"f26b43b06d9a9590dc23d990cf3b883949996d6147e4127f3313a0a700b7a8da"} Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.717447 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mqxwf" podUID="9c71e219-35d7-4e1e-a371-3456dfd29e83" containerName="extract-content" containerID="cri-o://f26b43b06d9a9590dc23d990cf3b883949996d6147e4127f3313a0a700b7a8da" gracePeriod=30 Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.746645 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tdtfh" event={"ID":"566ca894-037a-4b73-95d4-a6246c7c851a","Type":"ContainerStarted","Data":"519d7b33053858d4024a6ccd49792f293a0c0e3d4961a9a11caf19ec554e5298"} Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.746694 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tdtfh" 
event={"ID":"566ca894-037a-4b73-95d4-a6246c7c851a","Type":"ContainerStarted","Data":"9cd1ed3014b88b989fd630041a498b328f9b0617c0cba5a762ce4e78425e361e"} Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.746909 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wvttb"] Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.747050 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-tdtfh" Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.748075 4856 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-tdtfh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.61:8080/healthz\": dial tcp 10.217.0.61:8080: connect: connection refused" start-of-body= Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.748114 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-tdtfh" podUID="566ca894-037a-4b73-95d4-a6246c7c851a" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.61:8080/healthz\": dial tcp 10.217.0.61:8080: connect: connection refused" Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.750428 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wvttb"] Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.765025 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g8bgt" event={"ID":"0d7eb7b8-63ae-493a-850b-0b9f3b42e927","Type":"ContainerStarted","Data":"53b807aa482bf2d95f65ed65fcc51ebdaee0a2490bc9574ce63e9c46227c37e6"} Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.765198 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-g8bgt" 
podUID="0d7eb7b8-63ae-493a-850b-0b9f3b42e927" containerName="registry-server" containerID="cri-o://53b807aa482bf2d95f65ed65fcc51ebdaee0a2490bc9574ce63e9c46227c37e6" gracePeriod=30 Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.772110 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kwt4" event={"ID":"d6944fc9-b8d7-4013-8702-b5765c410a0b","Type":"ContainerDied","Data":"c6a85642ee783cdf59dd26ba744cc42773e760d42354900c16ebdd5e8e9ec111"} Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.772116 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4kwt4" Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.772166 4856 scope.go:117] "RemoveContainer" containerID="a6302eb5e39718f049dce88c0f2a26632538c4eb99b7dbea0dca8f7aae8306c7" Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.776858 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-txmdl" Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.776864 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-txmdl" event={"ID":"40a27476-22b1-4083-990e-66e70ccdaf4c","Type":"ContainerDied","Data":"894929ba59d66c867404dc7094d1e4c1b977bab79b099140b34c889e7b66ae16"} Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.778789 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-62nhd" Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.779656 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qknj9" podUID="a3fa94fe-e4ad-4171-b853-89878dc61569" containerName="extract-content" containerID="cri-o://f974d02f853250cd240f8efa3661017c6f59e32afaba8f242e28a7789f1e0242" gracePeriod=30 Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.779978 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qknj9" event={"ID":"a3fa94fe-e4ad-4171-b853-89878dc61569","Type":"ContainerStarted","Data":"f974d02f853250cd240f8efa3661017c6f59e32afaba8f242e28a7789f1e0242"} Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.797756 4856 scope.go:117] "RemoveContainer" containerID="25c95d58185d3429ff473dbc3a21342905624b31dd17e96573eb140be4c2402c" Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.814121 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-tdtfh" podStartSLOduration=8.814093849 podStartE2EDuration="8.814093849s" podCreationTimestamp="2026-01-26 17:02:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:02:52.80995737 +0000 UTC m=+268.763211361" watchObservedRunningTime="2026-01-26 17:02:52.814093849 +0000 UTC m=+268.767347830" Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.847307 4856 scope.go:117] "RemoveContainer" containerID="d4ffeb43e14865bfef28f884de6e5301087c2d9158d7a77b0c10a8dfec7c7ce2" Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.849215 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-g8bgt" podStartSLOduration=3.358497503 podStartE2EDuration="1m33.849199617s" 
podCreationTimestamp="2026-01-26 17:01:19 +0000 UTC" firstStartedPulling="2026-01-26 17:01:21.09655267 +0000 UTC m=+177.049806651" lastFinishedPulling="2026-01-26 17:02:51.587254784 +0000 UTC m=+267.540508765" observedRunningTime="2026-01-26 17:02:52.844690517 +0000 UTC m=+268.797944518" watchObservedRunningTime="2026-01-26 17:02:52.849199617 +0000 UTC m=+268.802453598" Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.891211 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-62nhd"] Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.893582 4856 scope.go:117] "RemoveContainer" containerID="d2eb8e794ba046c4faf481951340700b775e721a354e0ab5bea576c03e396810" Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.898336 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-62nhd"] Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.914429 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-txmdl"] Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.917617 4856 scope.go:117] "RemoveContainer" containerID="4be8cc185ffcb38acac0516b9ba74f7fa439552b3ff463ccf813f91341bce48c" Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.918418 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-txmdl"] Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.929939 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4kwt4"] Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.934237 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4kwt4"] Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 17:02:52.946507 4856 scope.go:117] "RemoveContainer" containerID="3ec09320bb48de5d8b6709469f0f84953408cf650f51d872373c21616d43f0de" Jan 26 17:02:52 crc kubenswrapper[4856]: I0126 
17:02:52.960640 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qgjjd" Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.116295 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtxjz\" (UniqueName: \"kubernetes.io/projected/89cf05de-642b-4574-9f79-45e7a3d4afa3-kube-api-access-gtxjz\") pod \"89cf05de-642b-4574-9f79-45e7a3d4afa3\" (UID: \"89cf05de-642b-4574-9f79-45e7a3d4afa3\") " Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.116359 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89cf05de-642b-4574-9f79-45e7a3d4afa3-utilities\") pod \"89cf05de-642b-4574-9f79-45e7a3d4afa3\" (UID: \"89cf05de-642b-4574-9f79-45e7a3d4afa3\") " Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.116457 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89cf05de-642b-4574-9f79-45e7a3d4afa3-catalog-content\") pod \"89cf05de-642b-4574-9f79-45e7a3d4afa3\" (UID: \"89cf05de-642b-4574-9f79-45e7a3d4afa3\") " Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.117989 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89cf05de-642b-4574-9f79-45e7a3d4afa3-utilities" (OuterVolumeSpecName: "utilities") pod "89cf05de-642b-4574-9f79-45e7a3d4afa3" (UID: "89cf05de-642b-4574-9f79-45e7a3d4afa3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.122574 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89cf05de-642b-4574-9f79-45e7a3d4afa3-kube-api-access-gtxjz" (OuterVolumeSpecName: "kube-api-access-gtxjz") pod "89cf05de-642b-4574-9f79-45e7a3d4afa3" (UID: "89cf05de-642b-4574-9f79-45e7a3d4afa3"). InnerVolumeSpecName "kube-api-access-gtxjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.148896 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n8hp2" Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.189782 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89cf05de-642b-4574-9f79-45e7a3d4afa3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "89cf05de-642b-4574-9f79-45e7a3d4afa3" (UID: "89cf05de-642b-4574-9f79-45e7a3d4afa3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.217701 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtxjz\" (UniqueName: \"kubernetes.io/projected/89cf05de-642b-4574-9f79-45e7a3d4afa3-kube-api-access-gtxjz\") on node \"crc\" DevicePath \"\"" Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.217780 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89cf05de-642b-4574-9f79-45e7a3d4afa3-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.217795 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89cf05de-642b-4574-9f79-45e7a3d4afa3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.283249 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mqxwf" Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.318826 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7mvx\" (UniqueName: \"kubernetes.io/projected/a6086d4b-faeb-4a12-8e6a-2a178dfe374c-kube-api-access-x7mvx\") pod \"a6086d4b-faeb-4a12-8e6a-2a178dfe374c\" (UID: \"a6086d4b-faeb-4a12-8e6a-2a178dfe374c\") " Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.318938 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6086d4b-faeb-4a12-8e6a-2a178dfe374c-utilities\") pod \"a6086d4b-faeb-4a12-8e6a-2a178dfe374c\" (UID: \"a6086d4b-faeb-4a12-8e6a-2a178dfe374c\") " Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.319015 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/a6086d4b-faeb-4a12-8e6a-2a178dfe374c-catalog-content\") pod \"a6086d4b-faeb-4a12-8e6a-2a178dfe374c\" (UID: \"a6086d4b-faeb-4a12-8e6a-2a178dfe374c\") " Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.319933 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6086d4b-faeb-4a12-8e6a-2a178dfe374c-utilities" (OuterVolumeSpecName: "utilities") pod "a6086d4b-faeb-4a12-8e6a-2a178dfe374c" (UID: "a6086d4b-faeb-4a12-8e6a-2a178dfe374c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.322052 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6086d4b-faeb-4a12-8e6a-2a178dfe374c-kube-api-access-x7mvx" (OuterVolumeSpecName: "kube-api-access-x7mvx") pod "a6086d4b-faeb-4a12-8e6a-2a178dfe374c" (UID: "a6086d4b-faeb-4a12-8e6a-2a178dfe374c"). InnerVolumeSpecName "kube-api-access-x7mvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.347025 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g8bgt_0d7eb7b8-63ae-493a-850b-0b9f3b42e927/registry-server/0.log" Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.347828 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g8bgt" Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.380367 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6086d4b-faeb-4a12-8e6a-2a178dfe374c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a6086d4b-faeb-4a12-8e6a-2a178dfe374c" (UID: "a6086d4b-faeb-4a12-8e6a-2a178dfe374c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.410392 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d37efbf-d18f-486b-9b43-bc4d181af4ca" path="/var/lib/kubelet/pods/2d37efbf-d18f-486b-9b43-bc4d181af4ca/volumes" Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.411026 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40a27476-22b1-4083-990e-66e70ccdaf4c" path="/var/lib/kubelet/pods/40a27476-22b1-4083-990e-66e70ccdaf4c/volumes" Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.411788 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766" path="/var/lib/kubelet/pods/7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766/volumes" Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.413016 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6944fc9-b8d7-4013-8702-b5765c410a0b" path="/var/lib/kubelet/pods/d6944fc9-b8d7-4013-8702-b5765c410a0b/volumes" Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.419602 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6swzw\" (UniqueName: \"kubernetes.io/projected/9c71e219-35d7-4e1e-a371-3456dfd29e83-kube-api-access-6swzw\") pod \"9c71e219-35d7-4e1e-a371-3456dfd29e83\" (UID: \"9c71e219-35d7-4e1e-a371-3456dfd29e83\") " Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.419690 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c71e219-35d7-4e1e-a371-3456dfd29e83-catalog-content\") pod \"9c71e219-35d7-4e1e-a371-3456dfd29e83\" (UID: \"9c71e219-35d7-4e1e-a371-3456dfd29e83\") " Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.419786 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/9c71e219-35d7-4e1e-a371-3456dfd29e83-utilities\") pod \"9c71e219-35d7-4e1e-a371-3456dfd29e83\" (UID: \"9c71e219-35d7-4e1e-a371-3456dfd29e83\") " Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.420071 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7mvx\" (UniqueName: \"kubernetes.io/projected/a6086d4b-faeb-4a12-8e6a-2a178dfe374c-kube-api-access-x7mvx\") on node \"crc\" DevicePath \"\"" Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.420086 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6086d4b-faeb-4a12-8e6a-2a178dfe374c-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.420096 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6086d4b-faeb-4a12-8e6a-2a178dfe374c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.420895 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c71e219-35d7-4e1e-a371-3456dfd29e83-utilities" (OuterVolumeSpecName: "utilities") pod "9c71e219-35d7-4e1e-a371-3456dfd29e83" (UID: "9c71e219-35d7-4e1e-a371-3456dfd29e83"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.423387 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c71e219-35d7-4e1e-a371-3456dfd29e83-kube-api-access-6swzw" (OuterVolumeSpecName: "kube-api-access-6swzw") pod "9c71e219-35d7-4e1e-a371-3456dfd29e83" (UID: "9c71e219-35d7-4e1e-a371-3456dfd29e83"). InnerVolumeSpecName "kube-api-access-6swzw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.521055 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d7eb7b8-63ae-493a-850b-0b9f3b42e927-catalog-content\") pod \"0d7eb7b8-63ae-493a-850b-0b9f3b42e927\" (UID: \"0d7eb7b8-63ae-493a-850b-0b9f3b42e927\") " Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.521434 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d7eb7b8-63ae-493a-850b-0b9f3b42e927-utilities\") pod \"0d7eb7b8-63ae-493a-850b-0b9f3b42e927\" (UID: \"0d7eb7b8-63ae-493a-850b-0b9f3b42e927\") " Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.521481 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcbvc\" (UniqueName: \"kubernetes.io/projected/0d7eb7b8-63ae-493a-850b-0b9f3b42e927-kube-api-access-dcbvc\") pod \"0d7eb7b8-63ae-493a-850b-0b9f3b42e927\" (UID: \"0d7eb7b8-63ae-493a-850b-0b9f3b42e927\") " Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.522117 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6swzw\" (UniqueName: \"kubernetes.io/projected/9c71e219-35d7-4e1e-a371-3456dfd29e83-kube-api-access-6swzw\") on node \"crc\" DevicePath \"\"" Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.522163 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c71e219-35d7-4e1e-a371-3456dfd29e83-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.522423 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d7eb7b8-63ae-493a-850b-0b9f3b42e927-utilities" (OuterVolumeSpecName: "utilities") pod "0d7eb7b8-63ae-493a-850b-0b9f3b42e927" (UID: 
"0d7eb7b8-63ae-493a-850b-0b9f3b42e927"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.524365 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d7eb7b8-63ae-493a-850b-0b9f3b42e927-kube-api-access-dcbvc" (OuterVolumeSpecName: "kube-api-access-dcbvc") pod "0d7eb7b8-63ae-493a-850b-0b9f3b42e927" (UID: "0d7eb7b8-63ae-493a-850b-0b9f3b42e927"). InnerVolumeSpecName "kube-api-access-dcbvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.546756 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d7eb7b8-63ae-493a-850b-0b9f3b42e927-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0d7eb7b8-63ae-493a-850b-0b9f3b42e927" (UID: "0d7eb7b8-63ae-493a-850b-0b9f3b42e927"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.600112 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c71e219-35d7-4e1e-a371-3456dfd29e83-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9c71e219-35d7-4e1e-a371-3456dfd29e83" (UID: "9c71e219-35d7-4e1e-a371-3456dfd29e83"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.623111 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d7eb7b8-63ae-493a-850b-0b9f3b42e927-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.623159 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c71e219-35d7-4e1e-a371-3456dfd29e83-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.623172 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcbvc\" (UniqueName: \"kubernetes.io/projected/0d7eb7b8-63ae-493a-850b-0b9f3b42e927-kube-api-access-dcbvc\") on node \"crc\" DevicePath \"\"" Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.623182 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d7eb7b8-63ae-493a-850b-0b9f3b42e927-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.642860 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qknj9_a3fa94fe-e4ad-4171-b853-89878dc61569/extract-content/0.log" Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.643817 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qknj9" Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.785661 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g8bgt_0d7eb7b8-63ae-493a-850b-0b9f3b42e927/registry-server/0.log" Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.786282 4856 generic.go:334] "Generic (PLEG): container finished" podID="0d7eb7b8-63ae-493a-850b-0b9f3b42e927" containerID="53b807aa482bf2d95f65ed65fcc51ebdaee0a2490bc9574ce63e9c46227c37e6" exitCode=1 Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.786350 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g8bgt" event={"ID":"0d7eb7b8-63ae-493a-850b-0b9f3b42e927","Type":"ContainerDied","Data":"53b807aa482bf2d95f65ed65fcc51ebdaee0a2490bc9574ce63e9c46227c37e6"} Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.786382 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g8bgt" event={"ID":"0d7eb7b8-63ae-493a-850b-0b9f3b42e927","Type":"ContainerDied","Data":"eca9c93c5c35ce3c6c300c833124d2e0c4c40f4feaf4a45bd12b4eecdb2f116c"} Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.786403 4856 scope.go:117] "RemoveContainer" containerID="53b807aa482bf2d95f65ed65fcc51ebdaee0a2490bc9574ce63e9c46227c37e6" Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.786557 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g8bgt" Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.792067 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n8hp2" event={"ID":"a6086d4b-faeb-4a12-8e6a-2a178dfe374c","Type":"ContainerDied","Data":"61bc611402534dad5a09a8edd4e25038026dc1769890ea9d2407a69eb9c888af"} Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.792629 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n8hp2" Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.794962 4856 generic.go:334] "Generic (PLEG): container finished" podID="9c71e219-35d7-4e1e-a371-3456dfd29e83" containerID="f26b43b06d9a9590dc23d990cf3b883949996d6147e4127f3313a0a700b7a8da" exitCode=0 Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.795036 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mqxwf" event={"ID":"9c71e219-35d7-4e1e-a371-3456dfd29e83","Type":"ContainerDied","Data":"f26b43b06d9a9590dc23d990cf3b883949996d6147e4127f3313a0a700b7a8da"} Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.795064 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mqxwf" event={"ID":"9c71e219-35d7-4e1e-a371-3456dfd29e83","Type":"ContainerDied","Data":"d09c4604a24ed1fd63afc114569ecaa6c0c08542e351c04817bb0f8a62c19b49"} Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.795151 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mqxwf" Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.802686 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qknj9_a3fa94fe-e4ad-4171-b853-89878dc61569/extract-content/0.log" Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.803205 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qknj9" Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.803309 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qknj9" event={"ID":"a3fa94fe-e4ad-4171-b853-89878dc61569","Type":"ContainerDied","Data":"f974d02f853250cd240f8efa3661017c6f59e32afaba8f242e28a7789f1e0242"} Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.803147 4856 generic.go:334] "Generic (PLEG): container finished" podID="a3fa94fe-e4ad-4171-b853-89878dc61569" containerID="f974d02f853250cd240f8efa3661017c6f59e32afaba8f242e28a7789f1e0242" exitCode=2 Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.803768 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qknj9" event={"ID":"a3fa94fe-e4ad-4171-b853-89878dc61569","Type":"ContainerDied","Data":"083b0b52d78f857657f62965a6b3636eba0ff933ac74b23de919043206cf9046"} Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.805568 4856 scope.go:117] "RemoveContainer" containerID="3926e8ab5a46c920f3ea8cad2d006d4f4059fc6b8c475a7f6f3a22211a28d019" Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.807405 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgjjd" event={"ID":"89cf05de-642b-4574-9f79-45e7a3d4afa3","Type":"ContainerDied","Data":"e31b957fac8983059a89e5a7867c6294be7613d1c35b810e6c7face168eea509"} Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.807453 4856 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qgjjd" Jan 26 17:02:53 crc kubenswrapper[4856]: I0126 17:02:53.811988 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-tdtfh" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.029829 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3fa94fe-e4ad-4171-b853-89878dc61569-catalog-content\") pod \"a3fa94fe-e4ad-4171-b853-89878dc61569\" (UID: \"a3fa94fe-e4ad-4171-b853-89878dc61569\") " Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.029844 4856 scope.go:117] "RemoveContainer" containerID="a9fe692a78995f7dad7ea556edacc772eb429ab92938195725add9a17bbe9e7c" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.029900 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvpzb\" (UniqueName: \"kubernetes.io/projected/a3fa94fe-e4ad-4171-b853-89878dc61569-kube-api-access-wvpzb\") pod \"a3fa94fe-e4ad-4171-b853-89878dc61569\" (UID: \"a3fa94fe-e4ad-4171-b853-89878dc61569\") " Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.029957 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3fa94fe-e4ad-4171-b853-89878dc61569-utilities\") pod \"a3fa94fe-e4ad-4171-b853-89878dc61569\" (UID: \"a3fa94fe-e4ad-4171-b853-89878dc61569\") " Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.030734 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3fa94fe-e4ad-4171-b853-89878dc61569-utilities" (OuterVolumeSpecName: "utilities") pod "a3fa94fe-e4ad-4171-b853-89878dc61569" (UID: "a3fa94fe-e4ad-4171-b853-89878dc61569"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.030824 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3fa94fe-e4ad-4171-b853-89878dc61569-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.050719 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3fa94fe-e4ad-4171-b853-89878dc61569-kube-api-access-wvpzb" (OuterVolumeSpecName: "kube-api-access-wvpzb") pod "a3fa94fe-e4ad-4171-b853-89878dc61569" (UID: "a3fa94fe-e4ad-4171-b853-89878dc61569"). InnerVolumeSpecName "kube-api-access-wvpzb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.058927 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n8hp2"] Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.062981 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-n8hp2"] Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.078458 4856 scope.go:117] "RemoveContainer" containerID="53b807aa482bf2d95f65ed65fcc51ebdaee0a2490bc9574ce63e9c46227c37e6" Jan 26 17:02:54 crc kubenswrapper[4856]: E0126 17:02:54.079104 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53b807aa482bf2d95f65ed65fcc51ebdaee0a2490bc9574ce63e9c46227c37e6\": container with ID starting with 53b807aa482bf2d95f65ed65fcc51ebdaee0a2490bc9574ce63e9c46227c37e6 not found: ID does not exist" containerID="53b807aa482bf2d95f65ed65fcc51ebdaee0a2490bc9574ce63e9c46227c37e6" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.079146 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53b807aa482bf2d95f65ed65fcc51ebdaee0a2490bc9574ce63e9c46227c37e6"} 
err="failed to get container status \"53b807aa482bf2d95f65ed65fcc51ebdaee0a2490bc9574ce63e9c46227c37e6\": rpc error: code = NotFound desc = could not find container \"53b807aa482bf2d95f65ed65fcc51ebdaee0a2490bc9574ce63e9c46227c37e6\": container with ID starting with 53b807aa482bf2d95f65ed65fcc51ebdaee0a2490bc9574ce63e9c46227c37e6 not found: ID does not exist" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.079178 4856 scope.go:117] "RemoveContainer" containerID="3926e8ab5a46c920f3ea8cad2d006d4f4059fc6b8c475a7f6f3a22211a28d019" Jan 26 17:02:54 crc kubenswrapper[4856]: E0126 17:02:54.079601 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3926e8ab5a46c920f3ea8cad2d006d4f4059fc6b8c475a7f6f3a22211a28d019\": container with ID starting with 3926e8ab5a46c920f3ea8cad2d006d4f4059fc6b8c475a7f6f3a22211a28d019 not found: ID does not exist" containerID="3926e8ab5a46c920f3ea8cad2d006d4f4059fc6b8c475a7f6f3a22211a28d019" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.079624 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3926e8ab5a46c920f3ea8cad2d006d4f4059fc6b8c475a7f6f3a22211a28d019"} err="failed to get container status \"3926e8ab5a46c920f3ea8cad2d006d4f4059fc6b8c475a7f6f3a22211a28d019\": rpc error: code = NotFound desc = could not find container \"3926e8ab5a46c920f3ea8cad2d006d4f4059fc6b8c475a7f6f3a22211a28d019\": container with ID starting with 3926e8ab5a46c920f3ea8cad2d006d4f4059fc6b8c475a7f6f3a22211a28d019 not found: ID does not exist" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.079636 4856 scope.go:117] "RemoveContainer" containerID="a9fe692a78995f7dad7ea556edacc772eb429ab92938195725add9a17bbe9e7c" Jan 26 17:02:54 crc kubenswrapper[4856]: E0126 17:02:54.079854 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a9fe692a78995f7dad7ea556edacc772eb429ab92938195725add9a17bbe9e7c\": container with ID starting with a9fe692a78995f7dad7ea556edacc772eb429ab92938195725add9a17bbe9e7c not found: ID does not exist" containerID="a9fe692a78995f7dad7ea556edacc772eb429ab92938195725add9a17bbe9e7c" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.079875 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9fe692a78995f7dad7ea556edacc772eb429ab92938195725add9a17bbe9e7c"} err="failed to get container status \"a9fe692a78995f7dad7ea556edacc772eb429ab92938195725add9a17bbe9e7c\": rpc error: code = NotFound desc = could not find container \"a9fe692a78995f7dad7ea556edacc772eb429ab92938195725add9a17bbe9e7c\": container with ID starting with a9fe692a78995f7dad7ea556edacc772eb429ab92938195725add9a17bbe9e7c not found: ID does not exist" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.079887 4856 scope.go:117] "RemoveContainer" containerID="28595b618ba5f672f05780cdefc1a37904c64c77c313fd5d1eade9c6ec61abec" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.091982 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mqxwf"] Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.107585 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mqxwf"] Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.111328 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-dnvfq" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.113710 4856 scope.go:117] "RemoveContainer" containerID="5638f22e046bc8f28ee2834fa7820e942af58e17d4efe952168ca98e63b3fa12" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.116379 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g8bgt"] Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 
17:02:54.120860 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-g8bgt"] Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.131893 4856 scope.go:117] "RemoveContainer" containerID="f26b43b06d9a9590dc23d990cf3b883949996d6147e4127f3313a0a700b7a8da" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.135427 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvpzb\" (UniqueName: \"kubernetes.io/projected/a3fa94fe-e4ad-4171-b853-89878dc61569-kube-api-access-wvpzb\") on node \"crc\" DevicePath \"\"" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.141303 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qgjjd"] Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.144288 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qgjjd"] Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.169135 4856 scope.go:117] "RemoveContainer" containerID="e6e9fc1c7474ee1cf14a50a96e79036f97d946d338d8a18c3434197cbd0438a8" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.190521 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3fa94fe-e4ad-4171-b853-89878dc61569-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a3fa94fe-e4ad-4171-b853-89878dc61569" (UID: "a3fa94fe-e4ad-4171-b853-89878dc61569"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.212508 4856 scope.go:117] "RemoveContainer" containerID="f26b43b06d9a9590dc23d990cf3b883949996d6147e4127f3313a0a700b7a8da" Jan 26 17:02:54 crc kubenswrapper[4856]: E0126 17:02:54.213905 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f26b43b06d9a9590dc23d990cf3b883949996d6147e4127f3313a0a700b7a8da\": container with ID starting with f26b43b06d9a9590dc23d990cf3b883949996d6147e4127f3313a0a700b7a8da not found: ID does not exist" containerID="f26b43b06d9a9590dc23d990cf3b883949996d6147e4127f3313a0a700b7a8da" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.213970 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f26b43b06d9a9590dc23d990cf3b883949996d6147e4127f3313a0a700b7a8da"} err="failed to get container status \"f26b43b06d9a9590dc23d990cf3b883949996d6147e4127f3313a0a700b7a8da\": rpc error: code = NotFound desc = could not find container \"f26b43b06d9a9590dc23d990cf3b883949996d6147e4127f3313a0a700b7a8da\": container with ID starting with f26b43b06d9a9590dc23d990cf3b883949996d6147e4127f3313a0a700b7a8da not found: ID does not exist" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.214012 4856 scope.go:117] "RemoveContainer" containerID="e6e9fc1c7474ee1cf14a50a96e79036f97d946d338d8a18c3434197cbd0438a8" Jan 26 17:02:54 crc kubenswrapper[4856]: E0126 17:02:54.214385 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6e9fc1c7474ee1cf14a50a96e79036f97d946d338d8a18c3434197cbd0438a8\": container with ID starting with e6e9fc1c7474ee1cf14a50a96e79036f97d946d338d8a18c3434197cbd0438a8 not found: ID does not exist" containerID="e6e9fc1c7474ee1cf14a50a96e79036f97d946d338d8a18c3434197cbd0438a8" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.214444 
4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6e9fc1c7474ee1cf14a50a96e79036f97d946d338d8a18c3434197cbd0438a8"} err="failed to get container status \"e6e9fc1c7474ee1cf14a50a96e79036f97d946d338d8a18c3434197cbd0438a8\": rpc error: code = NotFound desc = could not find container \"e6e9fc1c7474ee1cf14a50a96e79036f97d946d338d8a18c3434197cbd0438a8\": container with ID starting with e6e9fc1c7474ee1cf14a50a96e79036f97d946d338d8a18c3434197cbd0438a8 not found: ID does not exist" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.214480 4856 scope.go:117] "RemoveContainer" containerID="f974d02f853250cd240f8efa3661017c6f59e32afaba8f242e28a7789f1e0242" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.230674 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wxbdh"] Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.236565 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3fa94fe-e4ad-4171-b853-89878dc61569-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.260824 4856 scope.go:117] "RemoveContainer" containerID="6c718aeedef34f07c2686370f8f78fe4060881e116396cc02bb806370cffdb47" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.281908 4856 scope.go:117] "RemoveContainer" containerID="f974d02f853250cd240f8efa3661017c6f59e32afaba8f242e28a7789f1e0242" Jan 26 17:02:54 crc kubenswrapper[4856]: E0126 17:02:54.282272 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f974d02f853250cd240f8efa3661017c6f59e32afaba8f242e28a7789f1e0242\": container with ID starting with f974d02f853250cd240f8efa3661017c6f59e32afaba8f242e28a7789f1e0242 not found: ID does not exist" containerID="f974d02f853250cd240f8efa3661017c6f59e32afaba8f242e28a7789f1e0242" Jan 26 17:02:54 
crc kubenswrapper[4856]: I0126 17:02:54.282314 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f974d02f853250cd240f8efa3661017c6f59e32afaba8f242e28a7789f1e0242"} err="failed to get container status \"f974d02f853250cd240f8efa3661017c6f59e32afaba8f242e28a7789f1e0242\": rpc error: code = NotFound desc = could not find container \"f974d02f853250cd240f8efa3661017c6f59e32afaba8f242e28a7789f1e0242\": container with ID starting with f974d02f853250cd240f8efa3661017c6f59e32afaba8f242e28a7789f1e0242 not found: ID does not exist" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.282342 4856 scope.go:117] "RemoveContainer" containerID="6c718aeedef34f07c2686370f8f78fe4060881e116396cc02bb806370cffdb47" Jan 26 17:02:54 crc kubenswrapper[4856]: E0126 17:02:54.282987 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c718aeedef34f07c2686370f8f78fe4060881e116396cc02bb806370cffdb47\": container with ID starting with 6c718aeedef34f07c2686370f8f78fe4060881e116396cc02bb806370cffdb47 not found: ID does not exist" containerID="6c718aeedef34f07c2686370f8f78fe4060881e116396cc02bb806370cffdb47" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.283005 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c718aeedef34f07c2686370f8f78fe4060881e116396cc02bb806370cffdb47"} err="failed to get container status \"6c718aeedef34f07c2686370f8f78fe4060881e116396cc02bb806370cffdb47\": rpc error: code = NotFound desc = could not find container \"6c718aeedef34f07c2686370f8f78fe4060881e116396cc02bb806370cffdb47\": container with ID starting with 6c718aeedef34f07c2686370f8f78fe4060881e116396cc02bb806370cffdb47 not found: ID does not exist" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.283018 4856 scope.go:117] "RemoveContainer" containerID="ce1db845ee974faa07417a8ed669be7680c5b2f3c82683fabe5144e8c8d7d22c" Jan 26 
17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.301892 4856 scope.go:117] "RemoveContainer" containerID="de3e1fd7d5b6adab2150705e57df43577251e5278edb52956bb11f5539b1538a" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.373981 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bxhpt"] Jan 26 17:02:54 crc kubenswrapper[4856]: E0126 17:02:54.374261 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6944fc9-b8d7-4013-8702-b5765c410a0b" containerName="extract-utilities" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.374282 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6944fc9-b8d7-4013-8702-b5765c410a0b" containerName="extract-utilities" Jan 26 17:02:54 crc kubenswrapper[4856]: E0126 17:02:54.374295 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c71e219-35d7-4e1e-a371-3456dfd29e83" containerName="extract-content" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.374301 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c71e219-35d7-4e1e-a371-3456dfd29e83" containerName="extract-content" Jan 26 17:02:54 crc kubenswrapper[4856]: E0126 17:02:54.374312 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6944fc9-b8d7-4013-8702-b5765c410a0b" containerName="extract-content" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.374319 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6944fc9-b8d7-4013-8702-b5765c410a0b" containerName="extract-content" Jan 26 17:02:54 crc kubenswrapper[4856]: E0126 17:02:54.374326 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40a27476-22b1-4083-990e-66e70ccdaf4c" containerName="extract-content" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.374333 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="40a27476-22b1-4083-990e-66e70ccdaf4c" containerName="extract-content" Jan 26 17:02:54 crc kubenswrapper[4856]: E0126 17:02:54.374342 
4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40a27476-22b1-4083-990e-66e70ccdaf4c" containerName="registry-server" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.374347 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="40a27476-22b1-4083-990e-66e70ccdaf4c" containerName="registry-server" Jan 26 17:02:54 crc kubenswrapper[4856]: E0126 17:02:54.374355 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d37efbf-d18f-486b-9b43-bc4d181af4ca" containerName="marketplace-operator" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.374361 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d37efbf-d18f-486b-9b43-bc4d181af4ca" containerName="marketplace-operator" Jan 26 17:02:54 crc kubenswrapper[4856]: E0126 17:02:54.374367 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40a27476-22b1-4083-990e-66e70ccdaf4c" containerName="extract-utilities" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.374373 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="40a27476-22b1-4083-990e-66e70ccdaf4c" containerName="extract-utilities" Jan 26 17:02:54 crc kubenswrapper[4856]: E0126 17:02:54.374382 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3fa94fe-e4ad-4171-b853-89878dc61569" containerName="extract-utilities" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.374387 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3fa94fe-e4ad-4171-b853-89878dc61569" containerName="extract-utilities" Jan 26 17:02:54 crc kubenswrapper[4856]: E0126 17:02:54.374394 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6944fc9-b8d7-4013-8702-b5765c410a0b" containerName="registry-server" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.374399 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6944fc9-b8d7-4013-8702-b5765c410a0b" containerName="registry-server" Jan 26 17:02:54 crc kubenswrapper[4856]: E0126 
17:02:54.374405 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766" containerName="extract-utilities" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.374413 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766" containerName="extract-utilities" Jan 26 17:02:54 crc kubenswrapper[4856]: E0126 17:02:54.374421 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6086d4b-faeb-4a12-8e6a-2a178dfe374c" containerName="extract-utilities" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.374427 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6086d4b-faeb-4a12-8e6a-2a178dfe374c" containerName="extract-utilities" Jan 26 17:02:54 crc kubenswrapper[4856]: E0126 17:02:54.374434 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c71e219-35d7-4e1e-a371-3456dfd29e83" containerName="extract-utilities" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.374440 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c71e219-35d7-4e1e-a371-3456dfd29e83" containerName="extract-utilities" Jan 26 17:02:54 crc kubenswrapper[4856]: E0126 17:02:54.374448 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766" containerName="registry-server" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.374454 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766" containerName="registry-server" Jan 26 17:02:54 crc kubenswrapper[4856]: E0126 17:02:54.374463 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89cf05de-642b-4574-9f79-45e7a3d4afa3" containerName="extract-utilities" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.374469 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="89cf05de-642b-4574-9f79-45e7a3d4afa3" containerName="extract-utilities" Jan 26 17:02:54 crc kubenswrapper[4856]: 
E0126 17:02:54.374477 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89cf05de-642b-4574-9f79-45e7a3d4afa3" containerName="extract-content" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.374482 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="89cf05de-642b-4574-9f79-45e7a3d4afa3" containerName="extract-content" Jan 26 17:02:54 crc kubenswrapper[4856]: E0126 17:02:54.374490 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6086d4b-faeb-4a12-8e6a-2a178dfe374c" containerName="extract-content" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.374495 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6086d4b-faeb-4a12-8e6a-2a178dfe374c" containerName="extract-content" Jan 26 17:02:54 crc kubenswrapper[4856]: E0126 17:02:54.374504 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d7eb7b8-63ae-493a-850b-0b9f3b42e927" containerName="extract-content" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.374509 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d7eb7b8-63ae-493a-850b-0b9f3b42e927" containerName="extract-content" Jan 26 17:02:54 crc kubenswrapper[4856]: E0126 17:02:54.374517 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3fa94fe-e4ad-4171-b853-89878dc61569" containerName="extract-content" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.374540 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3fa94fe-e4ad-4171-b853-89878dc61569" containerName="extract-content" Jan 26 17:02:54 crc kubenswrapper[4856]: E0126 17:02:54.374549 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d7eb7b8-63ae-493a-850b-0b9f3b42e927" containerName="registry-server" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.374554 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d7eb7b8-63ae-493a-850b-0b9f3b42e927" containerName="registry-server" Jan 26 17:02:54 crc kubenswrapper[4856]: E0126 
17:02:54.374563 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d7eb7b8-63ae-493a-850b-0b9f3b42e927" containerName="extract-utilities" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.374570 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d7eb7b8-63ae-493a-850b-0b9f3b42e927" containerName="extract-utilities" Jan 26 17:02:54 crc kubenswrapper[4856]: E0126 17:02:54.374579 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766" containerName="extract-content" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.374585 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766" containerName="extract-content" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.374700 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6944fc9-b8d7-4013-8702-b5765c410a0b" containerName="registry-server" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.374711 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3fa94fe-e4ad-4171-b853-89878dc61569" containerName="extract-content" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.374719 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6086d4b-faeb-4a12-8e6a-2a178dfe374c" containerName="extract-content" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.374728 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fd0eddb-0f0f-4a37-b0b8-1d1b870a0766" containerName="registry-server" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.374734 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d7eb7b8-63ae-493a-850b-0b9f3b42e927" containerName="registry-server" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.374744 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d37efbf-d18f-486b-9b43-bc4d181af4ca" containerName="marketplace-operator" Jan 26 17:02:54 crc 
kubenswrapper[4856]: I0126 17:02:54.374750 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c71e219-35d7-4e1e-a371-3456dfd29e83" containerName="extract-content" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.374756 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="40a27476-22b1-4083-990e-66e70ccdaf4c" containerName="registry-server" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.374764 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="89cf05de-642b-4574-9f79-45e7a3d4afa3" containerName="extract-content" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.375680 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bxhpt" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.380319 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.387424 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bxhpt"] Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.445972 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qknj9"] Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.446490 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qknj9"] Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.540354 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f34c6a8-6023-480c-a25e-46f8c4f3766b-catalog-content\") pod \"certified-operators-bxhpt\" (UID: \"5f34c6a8-6023-480c-a25e-46f8c4f3766b\") " pod="openshift-marketplace/certified-operators-bxhpt" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.540437 4856 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crdtn\" (UniqueName: \"kubernetes.io/projected/5f34c6a8-6023-480c-a25e-46f8c4f3766b-kube-api-access-crdtn\") pod \"certified-operators-bxhpt\" (UID: \"5f34c6a8-6023-480c-a25e-46f8c4f3766b\") " pod="openshift-marketplace/certified-operators-bxhpt" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.540868 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f34c6a8-6023-480c-a25e-46f8c4f3766b-utilities\") pod \"certified-operators-bxhpt\" (UID: \"5f34c6a8-6023-480c-a25e-46f8c4f3766b\") " pod="openshift-marketplace/certified-operators-bxhpt" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.642311 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f34c6a8-6023-480c-a25e-46f8c4f3766b-utilities\") pod \"certified-operators-bxhpt\" (UID: \"5f34c6a8-6023-480c-a25e-46f8c4f3766b\") " pod="openshift-marketplace/certified-operators-bxhpt" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.642384 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f34c6a8-6023-480c-a25e-46f8c4f3766b-catalog-content\") pod \"certified-operators-bxhpt\" (UID: \"5f34c6a8-6023-480c-a25e-46f8c4f3766b\") " pod="openshift-marketplace/certified-operators-bxhpt" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.642432 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crdtn\" (UniqueName: \"kubernetes.io/projected/5f34c6a8-6023-480c-a25e-46f8c4f3766b-kube-api-access-crdtn\") pod \"certified-operators-bxhpt\" (UID: \"5f34c6a8-6023-480c-a25e-46f8c4f3766b\") " pod="openshift-marketplace/certified-operators-bxhpt" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.643261 4856 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f34c6a8-6023-480c-a25e-46f8c4f3766b-catalog-content\") pod \"certified-operators-bxhpt\" (UID: \"5f34c6a8-6023-480c-a25e-46f8c4f3766b\") " pod="openshift-marketplace/certified-operators-bxhpt" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.643267 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f34c6a8-6023-480c-a25e-46f8c4f3766b-utilities\") pod \"certified-operators-bxhpt\" (UID: \"5f34c6a8-6023-480c-a25e-46f8c4f3766b\") " pod="openshift-marketplace/certified-operators-bxhpt" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.977000 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crdtn\" (UniqueName: \"kubernetes.io/projected/5f34c6a8-6023-480c-a25e-46f8c4f3766b-kube-api-access-crdtn\") pod \"certified-operators-bxhpt\" (UID: \"5f34c6a8-6023-480c-a25e-46f8c4f3766b\") " pod="openshift-marketplace/certified-operators-bxhpt" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.987926 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lfhpz"] Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.989105 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lfhpz" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.995038 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 17:02:54 crc kubenswrapper[4856]: I0126 17:02:54.995746 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bxhpt" Jan 26 17:02:55 crc kubenswrapper[4856]: I0126 17:02:55.002075 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lfhpz"] Jan 26 17:02:55 crc kubenswrapper[4856]: I0126 17:02:55.038561 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cd36133-7a25-4dae-83a4-bbd0fbf1f2ee-utilities\") pod \"redhat-operators-lfhpz\" (UID: \"8cd36133-7a25-4dae-83a4-bbd0fbf1f2ee\") " pod="openshift-marketplace/redhat-operators-lfhpz" Jan 26 17:02:55 crc kubenswrapper[4856]: I0126 17:02:55.038630 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cd36133-7a25-4dae-83a4-bbd0fbf1f2ee-catalog-content\") pod \"redhat-operators-lfhpz\" (UID: \"8cd36133-7a25-4dae-83a4-bbd0fbf1f2ee\") " pod="openshift-marketplace/redhat-operators-lfhpz" Jan 26 17:02:55 crc kubenswrapper[4856]: I0126 17:02:55.038705 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm8n8\" (UniqueName: \"kubernetes.io/projected/8cd36133-7a25-4dae-83a4-bbd0fbf1f2ee-kube-api-access-fm8n8\") pod \"redhat-operators-lfhpz\" (UID: \"8cd36133-7a25-4dae-83a4-bbd0fbf1f2ee\") " pod="openshift-marketplace/redhat-operators-lfhpz" Jan 26 17:02:55 crc kubenswrapper[4856]: I0126 17:02:55.140309 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm8n8\" (UniqueName: \"kubernetes.io/projected/8cd36133-7a25-4dae-83a4-bbd0fbf1f2ee-kube-api-access-fm8n8\") pod \"redhat-operators-lfhpz\" (UID: \"8cd36133-7a25-4dae-83a4-bbd0fbf1f2ee\") " pod="openshift-marketplace/redhat-operators-lfhpz" Jan 26 17:02:55 crc kubenswrapper[4856]: I0126 17:02:55.140906 4856 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cd36133-7a25-4dae-83a4-bbd0fbf1f2ee-utilities\") pod \"redhat-operators-lfhpz\" (UID: \"8cd36133-7a25-4dae-83a4-bbd0fbf1f2ee\") " pod="openshift-marketplace/redhat-operators-lfhpz" Jan 26 17:02:55 crc kubenswrapper[4856]: I0126 17:02:55.140963 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cd36133-7a25-4dae-83a4-bbd0fbf1f2ee-catalog-content\") pod \"redhat-operators-lfhpz\" (UID: \"8cd36133-7a25-4dae-83a4-bbd0fbf1f2ee\") " pod="openshift-marketplace/redhat-operators-lfhpz" Jan 26 17:02:55 crc kubenswrapper[4856]: I0126 17:02:55.141616 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cd36133-7a25-4dae-83a4-bbd0fbf1f2ee-catalog-content\") pod \"redhat-operators-lfhpz\" (UID: \"8cd36133-7a25-4dae-83a4-bbd0fbf1f2ee\") " pod="openshift-marketplace/redhat-operators-lfhpz" Jan 26 17:02:55 crc kubenswrapper[4856]: I0126 17:02:55.143319 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cd36133-7a25-4dae-83a4-bbd0fbf1f2ee-utilities\") pod \"redhat-operators-lfhpz\" (UID: \"8cd36133-7a25-4dae-83a4-bbd0fbf1f2ee\") " pod="openshift-marketplace/redhat-operators-lfhpz" Jan 26 17:02:55 crc kubenswrapper[4856]: I0126 17:02:55.163281 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm8n8\" (UniqueName: \"kubernetes.io/projected/8cd36133-7a25-4dae-83a4-bbd0fbf1f2ee-kube-api-access-fm8n8\") pod \"redhat-operators-lfhpz\" (UID: \"8cd36133-7a25-4dae-83a4-bbd0fbf1f2ee\") " pod="openshift-marketplace/redhat-operators-lfhpz" Jan 26 17:02:55 crc kubenswrapper[4856]: I0126 17:02:55.234817 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bxhpt"] 
Jan 26 17:02:55 crc kubenswrapper[4856]: I0126 17:02:55.312572 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lfhpz" Jan 26 17:02:55 crc kubenswrapper[4856]: I0126 17:02:55.423020 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d7eb7b8-63ae-493a-850b-0b9f3b42e927" path="/var/lib/kubelet/pods/0d7eb7b8-63ae-493a-850b-0b9f3b42e927/volumes" Jan 26 17:02:55 crc kubenswrapper[4856]: I0126 17:02:55.424918 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89cf05de-642b-4574-9f79-45e7a3d4afa3" path="/var/lib/kubelet/pods/89cf05de-642b-4574-9f79-45e7a3d4afa3/volumes" Jan 26 17:02:55 crc kubenswrapper[4856]: I0126 17:02:55.429538 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c71e219-35d7-4e1e-a371-3456dfd29e83" path="/var/lib/kubelet/pods/9c71e219-35d7-4e1e-a371-3456dfd29e83/volumes" Jan 26 17:02:55 crc kubenswrapper[4856]: I0126 17:02:55.430318 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3fa94fe-e4ad-4171-b853-89878dc61569" path="/var/lib/kubelet/pods/a3fa94fe-e4ad-4171-b853-89878dc61569/volumes" Jan 26 17:02:55 crc kubenswrapper[4856]: I0126 17:02:55.430940 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6086d4b-faeb-4a12-8e6a-2a178dfe374c" path="/var/lib/kubelet/pods/a6086d4b-faeb-4a12-8e6a-2a178dfe374c/volumes" Jan 26 17:02:55 crc kubenswrapper[4856]: I0126 17:02:55.715637 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lfhpz"] Jan 26 17:02:55 crc kubenswrapper[4856]: W0126 17:02:55.722437 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8cd36133_7a25_4dae_83a4_bbd0fbf1f2ee.slice/crio-2ef5584084d8b1d348bc061565a055f6ec51467023159b7ea87a382eaa85c020 WatchSource:0}: Error finding container 
2ef5584084d8b1d348bc061565a055f6ec51467023159b7ea87a382eaa85c020: Status 404 returned error can't find the container with id 2ef5584084d8b1d348bc061565a055f6ec51467023159b7ea87a382eaa85c020 Jan 26 17:02:55 crc kubenswrapper[4856]: I0126 17:02:55.969991 4856 generic.go:334] "Generic (PLEG): container finished" podID="8cd36133-7a25-4dae-83a4-bbd0fbf1f2ee" containerID="88586be463e7344004a3be66277fc71e033d018b5c03dfa5c56597b48d237e72" exitCode=0 Jan 26 17:02:55 crc kubenswrapper[4856]: I0126 17:02:55.970082 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lfhpz" event={"ID":"8cd36133-7a25-4dae-83a4-bbd0fbf1f2ee","Type":"ContainerDied","Data":"88586be463e7344004a3be66277fc71e033d018b5c03dfa5c56597b48d237e72"} Jan 26 17:02:55 crc kubenswrapper[4856]: I0126 17:02:55.970256 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lfhpz" event={"ID":"8cd36133-7a25-4dae-83a4-bbd0fbf1f2ee","Type":"ContainerStarted","Data":"2ef5584084d8b1d348bc061565a055f6ec51467023159b7ea87a382eaa85c020"} Jan 26 17:02:55 crc kubenswrapper[4856]: I0126 17:02:55.974776 4856 generic.go:334] "Generic (PLEG): container finished" podID="5f34c6a8-6023-480c-a25e-46f8c4f3766b" containerID="405a5e16f820190605a6762d0b7653fa1a6bedd12b761afcd55093867a05ee57" exitCode=0 Jan 26 17:02:55 crc kubenswrapper[4856]: I0126 17:02:55.974821 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bxhpt" event={"ID":"5f34c6a8-6023-480c-a25e-46f8c4f3766b","Type":"ContainerDied","Data":"405a5e16f820190605a6762d0b7653fa1a6bedd12b761afcd55093867a05ee57"} Jan 26 17:02:55 crc kubenswrapper[4856]: I0126 17:02:55.974850 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bxhpt" event={"ID":"5f34c6a8-6023-480c-a25e-46f8c4f3766b","Type":"ContainerStarted","Data":"ae868d389f2d56b098915bbed54fc03534f4fd1519a0d344eda69f6356db31f0"} Jan 26 
17:02:56 crc kubenswrapper[4856]: I0126 17:02:56.779142 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-97mff"] Jan 26 17:02:56 crc kubenswrapper[4856]: I0126 17:02:56.789071 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-97mff" Jan 26 17:02:56 crc kubenswrapper[4856]: I0126 17:02:56.793615 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-97mff"] Jan 26 17:02:56 crc kubenswrapper[4856]: I0126 17:02:56.794315 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 17:02:56 crc kubenswrapper[4856]: I0126 17:02:56.861861 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/886857c0-659b-4904-b75a-c55c3f712747-utilities\") pod \"redhat-marketplace-97mff\" (UID: \"886857c0-659b-4904-b75a-c55c3f712747\") " pod="openshift-marketplace/redhat-marketplace-97mff" Jan 26 17:02:56 crc kubenswrapper[4856]: I0126 17:02:56.861984 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9487\" (UniqueName: \"kubernetes.io/projected/886857c0-659b-4904-b75a-c55c3f712747-kube-api-access-q9487\") pod \"redhat-marketplace-97mff\" (UID: \"886857c0-659b-4904-b75a-c55c3f712747\") " pod="openshift-marketplace/redhat-marketplace-97mff" Jan 26 17:02:56 crc kubenswrapper[4856]: I0126 17:02:56.862034 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/886857c0-659b-4904-b75a-c55c3f712747-catalog-content\") pod \"redhat-marketplace-97mff\" (UID: \"886857c0-659b-4904-b75a-c55c3f712747\") " pod="openshift-marketplace/redhat-marketplace-97mff" Jan 26 17:02:56 crc kubenswrapper[4856]: 
I0126 17:02:56.963128 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/886857c0-659b-4904-b75a-c55c3f712747-catalog-content\") pod \"redhat-marketplace-97mff\" (UID: \"886857c0-659b-4904-b75a-c55c3f712747\") " pod="openshift-marketplace/redhat-marketplace-97mff" Jan 26 17:02:56 crc kubenswrapper[4856]: I0126 17:02:56.963242 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/886857c0-659b-4904-b75a-c55c3f712747-utilities\") pod \"redhat-marketplace-97mff\" (UID: \"886857c0-659b-4904-b75a-c55c3f712747\") " pod="openshift-marketplace/redhat-marketplace-97mff" Jan 26 17:02:56 crc kubenswrapper[4856]: I0126 17:02:56.963323 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9487\" (UniqueName: \"kubernetes.io/projected/886857c0-659b-4904-b75a-c55c3f712747-kube-api-access-q9487\") pod \"redhat-marketplace-97mff\" (UID: \"886857c0-659b-4904-b75a-c55c3f712747\") " pod="openshift-marketplace/redhat-marketplace-97mff" Jan 26 17:02:56 crc kubenswrapper[4856]: I0126 17:02:56.963817 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/886857c0-659b-4904-b75a-c55c3f712747-catalog-content\") pod \"redhat-marketplace-97mff\" (UID: \"886857c0-659b-4904-b75a-c55c3f712747\") " pod="openshift-marketplace/redhat-marketplace-97mff" Jan 26 17:02:56 crc kubenswrapper[4856]: I0126 17:02:56.964099 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/886857c0-659b-4904-b75a-c55c3f712747-utilities\") pod \"redhat-marketplace-97mff\" (UID: \"886857c0-659b-4904-b75a-c55c3f712747\") " pod="openshift-marketplace/redhat-marketplace-97mff" Jan 26 17:02:57 crc kubenswrapper[4856]: I0126 17:02:57.055436 4856 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9487\" (UniqueName: \"kubernetes.io/projected/886857c0-659b-4904-b75a-c55c3f712747-kube-api-access-q9487\") pod \"redhat-marketplace-97mff\" (UID: \"886857c0-659b-4904-b75a-c55c3f712747\") " pod="openshift-marketplace/redhat-marketplace-97mff" Jan 26 17:02:57 crc kubenswrapper[4856]: I0126 17:02:57.115951 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-97mff" Jan 26 17:02:57 crc kubenswrapper[4856]: I0126 17:02:57.303018 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-97mff"] Jan 26 17:02:57 crc kubenswrapper[4856]: W0126 17:02:57.310753 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod886857c0_659b_4904_b75a_c55c3f712747.slice/crio-2063535f537d4fe37e3e34708f04c20619c6cc50b85697e69a9333b26c91a793 WatchSource:0}: Error finding container 2063535f537d4fe37e3e34708f04c20619c6cc50b85697e69a9333b26c91a793: Status 404 returned error can't find the container with id 2063535f537d4fe37e3e34708f04c20619c6cc50b85697e69a9333b26c91a793 Jan 26 17:02:57 crc kubenswrapper[4856]: I0126 17:02:57.377451 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gdp2n"] Jan 26 17:02:57 crc kubenswrapper[4856]: I0126 17:02:57.379285 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gdp2n" Jan 26 17:02:57 crc kubenswrapper[4856]: I0126 17:02:57.382601 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 26 17:02:57 crc kubenswrapper[4856]: I0126 17:02:57.385564 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gdp2n"] Jan 26 17:02:57 crc kubenswrapper[4856]: I0126 17:02:57.571030 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4327b726-2edc-40ad-ac96-b19a7e020048-utilities\") pod \"community-operators-gdp2n\" (UID: \"4327b726-2edc-40ad-ac96-b19a7e020048\") " pod="openshift-marketplace/community-operators-gdp2n" Jan 26 17:02:57 crc kubenswrapper[4856]: I0126 17:02:57.571097 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4327b726-2edc-40ad-ac96-b19a7e020048-catalog-content\") pod \"community-operators-gdp2n\" (UID: \"4327b726-2edc-40ad-ac96-b19a7e020048\") " pod="openshift-marketplace/community-operators-gdp2n" Jan 26 17:02:57 crc kubenswrapper[4856]: I0126 17:02:57.571188 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp8x9\" (UniqueName: \"kubernetes.io/projected/4327b726-2edc-40ad-ac96-b19a7e020048-kube-api-access-tp8x9\") pod \"community-operators-gdp2n\" (UID: \"4327b726-2edc-40ad-ac96-b19a7e020048\") " pod="openshift-marketplace/community-operators-gdp2n" Jan 26 17:02:57 crc kubenswrapper[4856]: I0126 17:02:57.672435 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4327b726-2edc-40ad-ac96-b19a7e020048-catalog-content\") pod \"community-operators-gdp2n\" (UID: 
\"4327b726-2edc-40ad-ac96-b19a7e020048\") " pod="openshift-marketplace/community-operators-gdp2n" Jan 26 17:02:57 crc kubenswrapper[4856]: I0126 17:02:57.672566 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tp8x9\" (UniqueName: \"kubernetes.io/projected/4327b726-2edc-40ad-ac96-b19a7e020048-kube-api-access-tp8x9\") pod \"community-operators-gdp2n\" (UID: \"4327b726-2edc-40ad-ac96-b19a7e020048\") " pod="openshift-marketplace/community-operators-gdp2n" Jan 26 17:02:57 crc kubenswrapper[4856]: I0126 17:02:57.672649 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4327b726-2edc-40ad-ac96-b19a7e020048-utilities\") pod \"community-operators-gdp2n\" (UID: \"4327b726-2edc-40ad-ac96-b19a7e020048\") " pod="openshift-marketplace/community-operators-gdp2n" Jan 26 17:02:57 crc kubenswrapper[4856]: I0126 17:02:57.673240 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4327b726-2edc-40ad-ac96-b19a7e020048-catalog-content\") pod \"community-operators-gdp2n\" (UID: \"4327b726-2edc-40ad-ac96-b19a7e020048\") " pod="openshift-marketplace/community-operators-gdp2n" Jan 26 17:02:57 crc kubenswrapper[4856]: I0126 17:02:57.673467 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4327b726-2edc-40ad-ac96-b19a7e020048-utilities\") pod \"community-operators-gdp2n\" (UID: \"4327b726-2edc-40ad-ac96-b19a7e020048\") " pod="openshift-marketplace/community-operators-gdp2n" Jan 26 17:02:57 crc kubenswrapper[4856]: I0126 17:02:57.696955 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tp8x9\" (UniqueName: \"kubernetes.io/projected/4327b726-2edc-40ad-ac96-b19a7e020048-kube-api-access-tp8x9\") pod \"community-operators-gdp2n\" (UID: 
\"4327b726-2edc-40ad-ac96-b19a7e020048\") " pod="openshift-marketplace/community-operators-gdp2n" Jan 26 17:02:57 crc kubenswrapper[4856]: I0126 17:02:57.810428 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gdp2n" Jan 26 17:02:58 crc kubenswrapper[4856]: I0126 17:02:58.108064 4856 generic.go:334] "Generic (PLEG): container finished" podID="5f34c6a8-6023-480c-a25e-46f8c4f3766b" containerID="ab4dc2168cf3030f71b121144c073ab78a14a965d8feaf1a0de933f786e1cb89" exitCode=0 Jan 26 17:02:58 crc kubenswrapper[4856]: I0126 17:02:58.108170 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bxhpt" event={"ID":"5f34c6a8-6023-480c-a25e-46f8c4f3766b","Type":"ContainerDied","Data":"ab4dc2168cf3030f71b121144c073ab78a14a965d8feaf1a0de933f786e1cb89"} Jan 26 17:02:58 crc kubenswrapper[4856]: I0126 17:02:58.111680 4856 generic.go:334] "Generic (PLEG): container finished" podID="886857c0-659b-4904-b75a-c55c3f712747" containerID="8cce484e79d411777eb43ce1a40864e7613f816cb566efdd41677d117f9c3633" exitCode=0 Jan 26 17:02:58 crc kubenswrapper[4856]: I0126 17:02:58.111731 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-97mff" event={"ID":"886857c0-659b-4904-b75a-c55c3f712747","Type":"ContainerDied","Data":"8cce484e79d411777eb43ce1a40864e7613f816cb566efdd41677d117f9c3633"} Jan 26 17:02:58 crc kubenswrapper[4856]: I0126 17:02:58.111756 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-97mff" event={"ID":"886857c0-659b-4904-b75a-c55c3f712747","Type":"ContainerStarted","Data":"2063535f537d4fe37e3e34708f04c20619c6cc50b85697e69a9333b26c91a793"} Jan 26 17:02:58 crc kubenswrapper[4856]: I0126 17:02:58.425918 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gdp2n"] Jan 26 17:02:59 crc kubenswrapper[4856]: I0126 
17:02:59.120005 4856 generic.go:334] "Generic (PLEG): container finished" podID="4327b726-2edc-40ad-ac96-b19a7e020048" containerID="d5222989432010fc64c5d354d24c5e19cfaecc54d2025e00e9a6eb627c8732c1" exitCode=0 Jan 26 17:02:59 crc kubenswrapper[4856]: I0126 17:02:59.120078 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gdp2n" event={"ID":"4327b726-2edc-40ad-ac96-b19a7e020048","Type":"ContainerDied","Data":"d5222989432010fc64c5d354d24c5e19cfaecc54d2025e00e9a6eb627c8732c1"} Jan 26 17:02:59 crc kubenswrapper[4856]: I0126 17:02:59.120641 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gdp2n" event={"ID":"4327b726-2edc-40ad-ac96-b19a7e020048","Type":"ContainerStarted","Data":"d2be1441bb0d46a1b01561dfecdd886e3af79e4093d59f1a2017ef581ded6586"} Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.128313 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bxhpt" event={"ID":"5f34c6a8-6023-480c-a25e-46f8c4f3766b","Type":"ContainerStarted","Data":"094471a22cb2be3806f3e0c5d6e07e36d6a5a423b089fe73b28707ddde1dde10"} Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.130069 4856 generic.go:334] "Generic (PLEG): container finished" podID="886857c0-659b-4904-b75a-c55c3f712747" containerID="a75ef75367730507a8b7594226c5e9d4e14716073f574dda81c029b084dafd94" exitCode=0 Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.130334 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-97mff" event={"ID":"886857c0-659b-4904-b75a-c55c3f712747","Type":"ContainerDied","Data":"a75ef75367730507a8b7594226c5e9d4e14716073f574dda81c029b084dafd94"} Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.133244 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gdp2n" 
event={"ID":"4327b726-2edc-40ad-ac96-b19a7e020048","Type":"ContainerStarted","Data":"3c2c6909ed99198befbbb9ed59971fc2aac68a42311872081e29bb9546929cc6"} Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.151115 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bxhpt" podStartSLOduration=3.113900137 podStartE2EDuration="6.151097845s" podCreationTimestamp="2026-01-26 17:02:54 +0000 UTC" firstStartedPulling="2026-01-26 17:02:55.97615485 +0000 UTC m=+271.929408831" lastFinishedPulling="2026-01-26 17:02:59.013352558 +0000 UTC m=+274.966606539" observedRunningTime="2026-01-26 17:03:00.149702565 +0000 UTC m=+276.102956546" watchObservedRunningTime="2026-01-26 17:03:00.151097845 +0000 UTC m=+276.104351826" Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.479645 4856 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.480803 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://89975b8f9428f81ab5d3fb48ced5dd9c837bea2feea3b89f5f7ff8d7d5d15b3e" gracePeriod=15 Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.481046 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://b03a01b651d9f66da4dd1f6e0d29ad97c0d6ae46b644c3d997d8dc99476706df" gracePeriod=15 Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.481116 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" 
containerID="cri-o://f2daa3f13d37b7ae19dfca406b6b5cfdcd4f211287f7d410193f2fee36a24553" gracePeriod=15 Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.481172 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://40d432159a07ba20bf95f058bf8a597d67cbd8d852519bed035694f8ba3d8ec4" gracePeriod=15 Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.481213 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://59ba0f131eee14df389391e46bc772894e49e976c3663df5fb3bc98b3cb4a3d6" gracePeriod=15 Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.485557 4856 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 17:03:00 crc kubenswrapper[4856]: E0126 17:03:00.485818 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.485840 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 17:03:00 crc kubenswrapper[4856]: E0126 17:03:00.485856 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.485863 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 17:03:00 crc kubenswrapper[4856]: E0126 17:03:00.485873 4856 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.485880 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 17:03:00 crc kubenswrapper[4856]: E0126 17:03:00.485901 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.485928 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 17:03:00 crc kubenswrapper[4856]: E0126 17:03:00.485936 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.485942 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 26 17:03:00 crc kubenswrapper[4856]: E0126 17:03:00.485948 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.485955 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 17:03:00 crc kubenswrapper[4856]: E0126 17:03:00.485965 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.485970 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.486225 
4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.486245 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.486258 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.486266 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.486273 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.486283 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.486294 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 17:03:00 crc kubenswrapper[4856]: E0126 17:03:00.486405 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.486413 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.488475 4856 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.488944 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.489630 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.489739 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.489772 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.494107 4856 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.552991 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 17:03:00 crc kubenswrapper[4856]: E0126 17:03:00.574142 4856 event.go:368] "Unable to 
write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.241:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-marketplace-97mff.188e56a765f7220b openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-97mff,UID:886857c0-659b-4904-b75a-c55c3f712747,APIVersion:v1,ResourceVersion:29932,FieldPath:spec.containers{registry-server},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\" in 441ms (442ms including waiting). Image size: 907837715 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 17:03:00.573405707 +0000 UTC m=+276.526659678,LastTimestamp:2026-01-26 17:03:00.573405707 +0000 UTC m=+276.526659678,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.594911 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.594964 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.594982 4856 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.595018 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.595049 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.595068 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.595092 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 
17:03:00.595106 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.595205 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.595237 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.595275 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.696295 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.696338 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.696377 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.696448 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.696483 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.696496 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.696570 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" 
(UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.696579 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.696633 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.696662 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 17:03:00 crc kubenswrapper[4856]: I0126 17:03:00.850688 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 17:03:01 crc kubenswrapper[4856]: I0126 17:03:01.141212 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-97mff" event={"ID":"886857c0-659b-4904-b75a-c55c3f712747","Type":"ContainerStarted","Data":"86283045d7d1049d9d8358f985c6aac8c275ef1f0b7a9715b13fd30bd1c328e7"} Jan 26 17:03:01 crc kubenswrapper[4856]: I0126 17:03:01.141823 4856 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:01 crc kubenswrapper[4856]: I0126 17:03:01.142046 4856 status_manager.go:851] "Failed to get status for pod" podUID="886857c0-659b-4904-b75a-c55c3f712747" pod="openshift-marketplace/redhat-marketplace-97mff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-97mff\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:01 crc kubenswrapper[4856]: I0126 17:03:01.142071 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"57d0defac9d2de663e30454f59f9c50c448b069b057e3908291a344d9995f94b"} Jan 26 17:03:01 crc kubenswrapper[4856]: I0126 17:03:01.144143 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 26 17:03:01 crc kubenswrapper[4856]: I0126 17:03:01.145433 4856 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 17:03:01 crc kubenswrapper[4856]: I0126 17:03:01.146100 4856 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b03a01b651d9f66da4dd1f6e0d29ad97c0d6ae46b644c3d997d8dc99476706df" exitCode=0 Jan 26 17:03:01 crc kubenswrapper[4856]: I0126 17:03:01.146131 4856 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f2daa3f13d37b7ae19dfca406b6b5cfdcd4f211287f7d410193f2fee36a24553" exitCode=0 Jan 26 17:03:01 crc kubenswrapper[4856]: I0126 17:03:01.146145 4856 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="40d432159a07ba20bf95f058bf8a597d67cbd8d852519bed035694f8ba3d8ec4" exitCode=0 Jan 26 17:03:01 crc kubenswrapper[4856]: I0126 17:03:01.146154 4856 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="59ba0f131eee14df389391e46bc772894e49e976c3663df5fb3bc98b3cb4a3d6" exitCode=2 Jan 26 17:03:01 crc kubenswrapper[4856]: I0126 17:03:01.146134 4856 scope.go:117] "RemoveContainer" containerID="3f07438e20bdf71c752bb661084c835341999c561bb5442d75e177223881276f" Jan 26 17:03:01 crc kubenswrapper[4856]: I0126 17:03:01.150682 4856 generic.go:334] "Generic (PLEG): container finished" podID="4327b726-2edc-40ad-ac96-b19a7e020048" containerID="3c2c6909ed99198befbbb9ed59971fc2aac68a42311872081e29bb9546929cc6" exitCode=0 Jan 26 17:03:01 crc kubenswrapper[4856]: I0126 17:03:01.150768 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gdp2n" event={"ID":"4327b726-2edc-40ad-ac96-b19a7e020048","Type":"ContainerDied","Data":"3c2c6909ed99198befbbb9ed59971fc2aac68a42311872081e29bb9546929cc6"} Jan 26 17:03:01 crc kubenswrapper[4856]: I0126 17:03:01.151356 4856 status_manager.go:851] "Failed to 
get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:01 crc kubenswrapper[4856]: I0126 17:03:01.151577 4856 status_manager.go:851] "Failed to get status for pod" podUID="4327b726-2edc-40ad-ac96-b19a7e020048" pod="openshift-marketplace/community-operators-gdp2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gdp2n\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:01 crc kubenswrapper[4856]: I0126 17:03:01.151753 4856 status_manager.go:851] "Failed to get status for pod" podUID="886857c0-659b-4904-b75a-c55c3f712747" pod="openshift-marketplace/redhat-marketplace-97mff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-97mff\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:01 crc kubenswrapper[4856]: I0126 17:03:01.153502 4856 generic.go:334] "Generic (PLEG): container finished" podID="69379820-3062-4964-a8dd-8689f8cea38d" containerID="beeb8e8929ad597a53e5bcbe203dbd0aeea7fb6f4cfbcd350384cfbddded9459" exitCode=0 Jan 26 17:03:01 crc kubenswrapper[4856]: I0126 17:03:01.153997 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"69379820-3062-4964-a8dd-8689f8cea38d","Type":"ContainerDied","Data":"beeb8e8929ad597a53e5bcbe203dbd0aeea7fb6f4cfbcd350384cfbddded9459"} Jan 26 17:03:01 crc kubenswrapper[4856]: I0126 17:03:01.154390 4856 status_manager.go:851] "Failed to get status for pod" podUID="69379820-3062-4964-a8dd-8689f8cea38d" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:01 crc kubenswrapper[4856]: I0126 17:03:01.154842 4856 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:01 crc kubenswrapper[4856]: I0126 17:03:01.155025 4856 status_manager.go:851] "Failed to get status for pod" podUID="4327b726-2edc-40ad-ac96-b19a7e020048" pod="openshift-marketplace/community-operators-gdp2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gdp2n\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:01 crc kubenswrapper[4856]: I0126 17:03:01.155167 4856 status_manager.go:851] "Failed to get status for pod" podUID="886857c0-659b-4904-b75a-c55c3f712747" pod="openshift-marketplace/redhat-marketplace-97mff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-97mff\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:02 crc kubenswrapper[4856]: I0126 17:03:02.160851 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"b6f864463a7443541f4490597f84a8f03e6d8c1a587e47002bb795632c0df2d6"} Jan 26 17:03:02 crc kubenswrapper[4856]: I0126 17:03:02.162397 4856 status_manager.go:851] "Failed to get status for pod" podUID="69379820-3062-4964-a8dd-8689f8cea38d" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:02 crc kubenswrapper[4856]: I0126 17:03:02.162795 4856 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:02 crc kubenswrapper[4856]: I0126 17:03:02.163191 4856 status_manager.go:851] "Failed to get status for pod" podUID="4327b726-2edc-40ad-ac96-b19a7e020048" pod="openshift-marketplace/community-operators-gdp2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gdp2n\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:02 crc kubenswrapper[4856]: I0126 17:03:02.163414 4856 status_manager.go:851] "Failed to get status for pod" podUID="886857c0-659b-4904-b75a-c55c3f712747" pod="openshift-marketplace/redhat-marketplace-97mff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-97mff\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:02 crc kubenswrapper[4856]: I0126 17:03:02.164855 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 17:03:02 crc kubenswrapper[4856]: I0126 17:03:02.167901 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gdp2n" event={"ID":"4327b726-2edc-40ad-ac96-b19a7e020048","Type":"ContainerStarted","Data":"c8e6dd9ff8ca391ce4dc51ffe3ea5566118b1b3b87a7744e96837eab0a37a59a"} Jan 26 17:03:02 crc kubenswrapper[4856]: I0126 17:03:02.168752 
4856 status_manager.go:851] "Failed to get status for pod" podUID="886857c0-659b-4904-b75a-c55c3f712747" pod="openshift-marketplace/redhat-marketplace-97mff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-97mff\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:02 crc kubenswrapper[4856]: I0126 17:03:02.169232 4856 status_manager.go:851] "Failed to get status for pod" podUID="69379820-3062-4964-a8dd-8689f8cea38d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:02 crc kubenswrapper[4856]: I0126 17:03:02.169615 4856 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:02 crc kubenswrapper[4856]: I0126 17:03:02.169826 4856 status_manager.go:851] "Failed to get status for pod" podUID="4327b726-2edc-40ad-ac96-b19a7e020048" pod="openshift-marketplace/community-operators-gdp2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gdp2n\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:02 crc kubenswrapper[4856]: I0126 17:03:02.480843 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 17:03:02 crc kubenswrapper[4856]: I0126 17:03:02.482133 4856 status_manager.go:851] "Failed to get status for pod" podUID="886857c0-659b-4904-b75a-c55c3f712747" pod="openshift-marketplace/redhat-marketplace-97mff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-97mff\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:02 crc kubenswrapper[4856]: I0126 17:03:02.482668 4856 status_manager.go:851] "Failed to get status for pod" podUID="69379820-3062-4964-a8dd-8689f8cea38d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:02 crc kubenswrapper[4856]: I0126 17:03:02.483030 4856 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:02 crc kubenswrapper[4856]: I0126 17:03:02.483471 4856 status_manager.go:851] "Failed to get status for pod" podUID="4327b726-2edc-40ad-ac96-b19a7e020048" pod="openshift-marketplace/community-operators-gdp2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gdp2n\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:02 crc kubenswrapper[4856]: I0126 17:03:02.623008 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/69379820-3062-4964-a8dd-8689f8cea38d-kubelet-dir\") pod \"69379820-3062-4964-a8dd-8689f8cea38d\" (UID: 
\"69379820-3062-4964-a8dd-8689f8cea38d\") " Jan 26 17:03:02 crc kubenswrapper[4856]: I0126 17:03:02.623466 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69379820-3062-4964-a8dd-8689f8cea38d-kube-api-access\") pod \"69379820-3062-4964-a8dd-8689f8cea38d\" (UID: \"69379820-3062-4964-a8dd-8689f8cea38d\") " Jan 26 17:03:02 crc kubenswrapper[4856]: I0126 17:03:02.623498 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/69379820-3062-4964-a8dd-8689f8cea38d-var-lock\") pod \"69379820-3062-4964-a8dd-8689f8cea38d\" (UID: \"69379820-3062-4964-a8dd-8689f8cea38d\") " Jan 26 17:03:03 crc kubenswrapper[4856]: I0126 17:03:02.623920 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69379820-3062-4964-a8dd-8689f8cea38d-var-lock" (OuterVolumeSpecName: "var-lock") pod "69379820-3062-4964-a8dd-8689f8cea38d" (UID: "69379820-3062-4964-a8dd-8689f8cea38d"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:03:03 crc kubenswrapper[4856]: I0126 17:03:02.623937 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69379820-3062-4964-a8dd-8689f8cea38d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "69379820-3062-4964-a8dd-8689f8cea38d" (UID: "69379820-3062-4964-a8dd-8689f8cea38d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:03:03 crc kubenswrapper[4856]: I0126 17:03:02.629837 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69379820-3062-4964-a8dd-8689f8cea38d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "69379820-3062-4964-a8dd-8689f8cea38d" (UID: "69379820-3062-4964-a8dd-8689f8cea38d"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:03:03 crc kubenswrapper[4856]: I0126 17:03:02.725137 4856 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/69379820-3062-4964-a8dd-8689f8cea38d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 17:03:03 crc kubenswrapper[4856]: I0126 17:03:02.725168 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69379820-3062-4964-a8dd-8689f8cea38d-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 17:03:03 crc kubenswrapper[4856]: I0126 17:03:02.725179 4856 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/69379820-3062-4964-a8dd-8689f8cea38d-var-lock\") on node \"crc\" DevicePath \"\"" Jan 26 17:03:03 crc kubenswrapper[4856]: I0126 17:03:03.178438 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"69379820-3062-4964-a8dd-8689f8cea38d","Type":"ContainerDied","Data":"af60ab5d4a2b57ad1bbcc4a879fdc9dce5f1b3ef1e2f5eb96e13241cdf6f2277"} Jan 26 17:03:03 crc kubenswrapper[4856]: I0126 17:03:03.178737 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af60ab5d4a2b57ad1bbcc4a879fdc9dce5f1b3ef1e2f5eb96e13241cdf6f2277" Jan 26 17:03:03 crc kubenswrapper[4856]: I0126 17:03:03.178804 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 17:03:03 crc kubenswrapper[4856]: I0126 17:03:03.188062 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 17:03:03 crc kubenswrapper[4856]: I0126 17:03:03.189085 4856 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="89975b8f9428f81ab5d3fb48ced5dd9c837bea2feea3b89f5f7ff8d7d5d15b3e" exitCode=0 Jan 26 17:03:04 crc kubenswrapper[4856]: I0126 17:03:04.209999 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 17:03:04 crc kubenswrapper[4856]: I0126 17:03:04.214179 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4365b662a1ef1f25c43b0d9068b29f4b8c92282da9679a062ca15b8955aa46e5" Jan 26 17:03:04 crc kubenswrapper[4856]: I0126 17:03:04.574418 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 17:03:04 crc kubenswrapper[4856]: I0126 17:03:04.575969 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:03:04 crc kubenswrapper[4856]: I0126 17:03:04.576980 4856 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:04 crc kubenswrapper[4856]: I0126 17:03:04.577724 4856 status_manager.go:851] "Failed to get status for pod" podUID="69379820-3062-4964-a8dd-8689f8cea38d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:04 crc kubenswrapper[4856]: I0126 17:03:04.578063 4856 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:04 crc kubenswrapper[4856]: I0126 17:03:04.578248 4856 status_manager.go:851] "Failed to get status for pod" podUID="4327b726-2edc-40ad-ac96-b19a7e020048" pod="openshift-marketplace/community-operators-gdp2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gdp2n\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:04 crc kubenswrapper[4856]: I0126 17:03:04.578422 4856 status_manager.go:851] "Failed to get status for pod" podUID="886857c0-659b-4904-b75a-c55c3f712747" pod="openshift-marketplace/redhat-marketplace-97mff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-97mff\": 
dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:04 crc kubenswrapper[4856]: E0126 17:03:04.591249 4856 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:04 crc kubenswrapper[4856]: E0126 17:03:04.591850 4856 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:04 crc kubenswrapper[4856]: E0126 17:03:04.592454 4856 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:04 crc kubenswrapper[4856]: E0126 17:03:04.592789 4856 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:04 crc kubenswrapper[4856]: E0126 17:03:04.593163 4856 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:04 crc kubenswrapper[4856]: I0126 17:03:04.593213 4856 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 26 17:03:04 crc kubenswrapper[4856]: E0126 17:03:04.593704 4856 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 
38.102.83.241:6443: connect: connection refused" interval="200ms" Jan 26 17:03:04 crc kubenswrapper[4856]: I0126 17:03:04.754053 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:03:04 crc kubenswrapper[4856]: I0126 17:03:04.754397 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 17:03:04 crc kubenswrapper[4856]: I0126 17:03:04.754537 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 17:03:04 crc kubenswrapper[4856]: I0126 17:03:04.754630 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:03:04 crc kubenswrapper[4856]: I0126 17:03:04.754712 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 17:03:04 crc kubenswrapper[4856]: I0126 17:03:04.754794 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:03:04 crc kubenswrapper[4856]: I0126 17:03:04.755199 4856 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 26 17:03:04 crc kubenswrapper[4856]: I0126 17:03:04.755224 4856 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 26 17:03:04 crc kubenswrapper[4856]: I0126 17:03:04.755236 4856 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 17:03:04 crc kubenswrapper[4856]: E0126 17:03:04.795756 4856 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.241:6443: connect: connection refused" interval="400ms" Jan 26 17:03:04 crc kubenswrapper[4856]: I0126 17:03:04.997090 4856 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/certified-operators-bxhpt" Jan 26 17:03:04 crc kubenswrapper[4856]: I0126 17:03:04.997154 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bxhpt" Jan 26 17:03:05 crc kubenswrapper[4856]: I0126 17:03:05.038624 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bxhpt" Jan 26 17:03:05 crc kubenswrapper[4856]: I0126 17:03:05.039254 4856 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:05 crc kubenswrapper[4856]: I0126 17:03:05.039741 4856 status_manager.go:851] "Failed to get status for pod" podUID="69379820-3062-4964-a8dd-8689f8cea38d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:05 crc kubenswrapper[4856]: I0126 17:03:05.040096 4856 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:05 crc kubenswrapper[4856]: I0126 17:03:05.040485 4856 status_manager.go:851] "Failed to get status for pod" podUID="4327b726-2edc-40ad-ac96-b19a7e020048" pod="openshift-marketplace/community-operators-gdp2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gdp2n\": dial 
tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:05 crc kubenswrapper[4856]: I0126 17:03:05.040746 4856 status_manager.go:851] "Failed to get status for pod" podUID="5f34c6a8-6023-480c-a25e-46f8c4f3766b" pod="openshift-marketplace/certified-operators-bxhpt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-bxhpt\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:05 crc kubenswrapper[4856]: I0126 17:03:05.041029 4856 status_manager.go:851] "Failed to get status for pod" podUID="886857c0-659b-4904-b75a-c55c3f712747" pod="openshift-marketplace/redhat-marketplace-97mff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-97mff\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:05 crc kubenswrapper[4856]: E0126 17:03:05.197413 4856 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.241:6443: connect: connection refused" interval="800ms" Jan 26 17:03:05 crc kubenswrapper[4856]: I0126 17:03:05.219126 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:03:05 crc kubenswrapper[4856]: I0126 17:03:05.236018 4856 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:05 crc kubenswrapper[4856]: I0126 17:03:05.236421 4856 status_manager.go:851] "Failed to get status for pod" podUID="4327b726-2edc-40ad-ac96-b19a7e020048" pod="openshift-marketplace/community-operators-gdp2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gdp2n\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:05 crc kubenswrapper[4856]: I0126 17:03:05.236959 4856 status_manager.go:851] "Failed to get status for pod" podUID="5f34c6a8-6023-480c-a25e-46f8c4f3766b" pod="openshift-marketplace/certified-operators-bxhpt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-bxhpt\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:05 crc kubenswrapper[4856]: I0126 17:03:05.237237 4856 status_manager.go:851] "Failed to get status for pod" podUID="886857c0-659b-4904-b75a-c55c3f712747" pod="openshift-marketplace/redhat-marketplace-97mff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-97mff\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:05 crc kubenswrapper[4856]: I0126 17:03:05.237553 4856 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:05 crc kubenswrapper[4856]: I0126 17:03:05.237888 4856 status_manager.go:851] "Failed to get status for pod" podUID="69379820-3062-4964-a8dd-8689f8cea38d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:05 crc kubenswrapper[4856]: I0126 17:03:05.257763 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bxhpt" Jan 26 17:03:05 crc kubenswrapper[4856]: I0126 17:03:05.258302 4856 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:05 crc kubenswrapper[4856]: I0126 17:03:05.258789 4856 status_manager.go:851] "Failed to get status for pod" podUID="4327b726-2edc-40ad-ac96-b19a7e020048" pod="openshift-marketplace/community-operators-gdp2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gdp2n\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:05 crc kubenswrapper[4856]: I0126 17:03:05.259330 4856 status_manager.go:851] "Failed to get status for pod" podUID="5f34c6a8-6023-480c-a25e-46f8c4f3766b" pod="openshift-marketplace/certified-operators-bxhpt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-bxhpt\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:05 crc kubenswrapper[4856]: I0126 
17:03:05.259789 4856 status_manager.go:851] "Failed to get status for pod" podUID="886857c0-659b-4904-b75a-c55c3f712747" pod="openshift-marketplace/redhat-marketplace-97mff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-97mff\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:05 crc kubenswrapper[4856]: I0126 17:03:05.260097 4856 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:05 crc kubenswrapper[4856]: I0126 17:03:05.260624 4856 status_manager.go:851] "Failed to get status for pod" podUID="69379820-3062-4964-a8dd-8689f8cea38d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:05 crc kubenswrapper[4856]: I0126 17:03:05.397699 4856 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:05 crc kubenswrapper[4856]: I0126 17:03:05.398092 4856 status_manager.go:851] "Failed to get status for pod" podUID="69379820-3062-4964-a8dd-8689f8cea38d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:05 crc kubenswrapper[4856]: I0126 17:03:05.398396 4856 status_manager.go:851] "Failed to get 
status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:05 crc kubenswrapper[4856]: I0126 17:03:05.398707 4856 status_manager.go:851] "Failed to get status for pod" podUID="4327b726-2edc-40ad-ac96-b19a7e020048" pod="openshift-marketplace/community-operators-gdp2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gdp2n\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:05 crc kubenswrapper[4856]: I0126 17:03:05.398937 4856 status_manager.go:851] "Failed to get status for pod" podUID="5f34c6a8-6023-480c-a25e-46f8c4f3766b" pod="openshift-marketplace/certified-operators-bxhpt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-bxhpt\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:05 crc kubenswrapper[4856]: I0126 17:03:05.399188 4856 status_manager.go:851] "Failed to get status for pod" podUID="886857c0-659b-4904-b75a-c55c3f712747" pod="openshift-marketplace/redhat-marketplace-97mff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-97mff\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:05 crc kubenswrapper[4856]: I0126 17:03:05.401723 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 26 17:03:05 crc kubenswrapper[4856]: E0126 17:03:05.998167 4856 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.241:6443: connect: connection refused" interval="1.6s" Jan 26 17:03:06 crc kubenswrapper[4856]: I0126 17:03:06.229018 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lfhpz" event={"ID":"8cd36133-7a25-4dae-83a4-bbd0fbf1f2ee","Type":"ContainerStarted","Data":"fb660ac43070a8988110315cf928def975b7819ef69dc6d82da88c39e5107bbb"} Jan 26 17:03:06 crc kubenswrapper[4856]: I0126 17:03:06.229118 4856 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:06 crc kubenswrapper[4856]: I0126 17:03:06.229308 4856 status_manager.go:851] "Failed to get status for pod" podUID="4327b726-2edc-40ad-ac96-b19a7e020048" pod="openshift-marketplace/community-operators-gdp2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gdp2n\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:06 crc kubenswrapper[4856]: I0126 17:03:06.229516 4856 status_manager.go:851] "Failed to get status for pod" podUID="5f34c6a8-6023-480c-a25e-46f8c4f3766b" pod="openshift-marketplace/certified-operators-bxhpt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-bxhpt\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:06 crc kubenswrapper[4856]: I0126 17:03:06.229713 4856 status_manager.go:851] "Failed to get status for pod" podUID="8cd36133-7a25-4dae-83a4-bbd0fbf1f2ee" pod="openshift-marketplace/redhat-operators-lfhpz" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lfhpz\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:06 crc kubenswrapper[4856]: I0126 17:03:06.229861 4856 status_manager.go:851] "Failed to get status for pod" podUID="886857c0-659b-4904-b75a-c55c3f712747" pod="openshift-marketplace/redhat-marketplace-97mff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-97mff\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:06 crc kubenswrapper[4856]: I0126 17:03:06.230005 4856 status_manager.go:851] "Failed to get status for pod" podUID="69379820-3062-4964-a8dd-8689f8cea38d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:07 crc kubenswrapper[4856]: I0126 17:03:07.116591 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-97mff" Jan 26 17:03:07 crc kubenswrapper[4856]: I0126 17:03:07.117070 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-97mff" Jan 26 17:03:07 crc kubenswrapper[4856]: I0126 17:03:07.157140 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-97mff" Jan 26 17:03:07 crc kubenswrapper[4856]: I0126 17:03:07.157864 4856 status_manager.go:851] "Failed to get status for pod" podUID="5f34c6a8-6023-480c-a25e-46f8c4f3766b" pod="openshift-marketplace/certified-operators-bxhpt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-bxhpt\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:07 crc kubenswrapper[4856]: I0126 17:03:07.158349 4856 status_manager.go:851] 
"Failed to get status for pod" podUID="8cd36133-7a25-4dae-83a4-bbd0fbf1f2ee" pod="openshift-marketplace/redhat-operators-lfhpz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lfhpz\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:07 crc kubenswrapper[4856]: I0126 17:03:07.158791 4856 status_manager.go:851] "Failed to get status for pod" podUID="886857c0-659b-4904-b75a-c55c3f712747" pod="openshift-marketplace/redhat-marketplace-97mff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-97mff\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:07 crc kubenswrapper[4856]: I0126 17:03:07.159088 4856 status_manager.go:851] "Failed to get status for pod" podUID="69379820-3062-4964-a8dd-8689f8cea38d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:07 crc kubenswrapper[4856]: I0126 17:03:07.159373 4856 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:07 crc kubenswrapper[4856]: I0126 17:03:07.159620 4856 status_manager.go:851] "Failed to get status for pod" podUID="4327b726-2edc-40ad-ac96-b19a7e020048" pod="openshift-marketplace/community-operators-gdp2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gdp2n\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:07 crc kubenswrapper[4856]: I0126 17:03:07.237317 4856 generic.go:334] "Generic (PLEG): 
container finished" podID="8cd36133-7a25-4dae-83a4-bbd0fbf1f2ee" containerID="fb660ac43070a8988110315cf928def975b7819ef69dc6d82da88c39e5107bbb" exitCode=0 Jan 26 17:03:07 crc kubenswrapper[4856]: I0126 17:03:07.237427 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lfhpz" event={"ID":"8cd36133-7a25-4dae-83a4-bbd0fbf1f2ee","Type":"ContainerDied","Data":"fb660ac43070a8988110315cf928def975b7819ef69dc6d82da88c39e5107bbb"} Jan 26 17:03:07 crc kubenswrapper[4856]: I0126 17:03:07.237976 4856 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:07 crc kubenswrapper[4856]: I0126 17:03:07.238230 4856 status_manager.go:851] "Failed to get status for pod" podUID="4327b726-2edc-40ad-ac96-b19a7e020048" pod="openshift-marketplace/community-operators-gdp2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gdp2n\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:07 crc kubenswrapper[4856]: I0126 17:03:07.238553 4856 status_manager.go:851] "Failed to get status for pod" podUID="5f34c6a8-6023-480c-a25e-46f8c4f3766b" pod="openshift-marketplace/certified-operators-bxhpt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-bxhpt\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:07 crc kubenswrapper[4856]: I0126 17:03:07.240106 4856 status_manager.go:851] "Failed to get status for pod" podUID="8cd36133-7a25-4dae-83a4-bbd0fbf1f2ee" pod="openshift-marketplace/redhat-operators-lfhpz" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lfhpz\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:07 crc kubenswrapper[4856]: I0126 17:03:07.240411 4856 status_manager.go:851] "Failed to get status for pod" podUID="886857c0-659b-4904-b75a-c55c3f712747" pod="openshift-marketplace/redhat-marketplace-97mff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-97mff\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:07 crc kubenswrapper[4856]: I0126 17:03:07.240707 4856 status_manager.go:851] "Failed to get status for pod" podUID="69379820-3062-4964-a8dd-8689f8cea38d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:07 crc kubenswrapper[4856]: I0126 17:03:07.276999 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-97mff" Jan 26 17:03:07 crc kubenswrapper[4856]: I0126 17:03:07.277731 4856 status_manager.go:851] "Failed to get status for pod" podUID="886857c0-659b-4904-b75a-c55c3f712747" pod="openshift-marketplace/redhat-marketplace-97mff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-97mff\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:07 crc kubenswrapper[4856]: I0126 17:03:07.278188 4856 status_manager.go:851] "Failed to get status for pod" podUID="69379820-3062-4964-a8dd-8689f8cea38d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:07 crc kubenswrapper[4856]: I0126 17:03:07.278621 4856 
status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:07 crc kubenswrapper[4856]: I0126 17:03:07.279010 4856 status_manager.go:851] "Failed to get status for pod" podUID="4327b726-2edc-40ad-ac96-b19a7e020048" pod="openshift-marketplace/community-operators-gdp2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gdp2n\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:07 crc kubenswrapper[4856]: I0126 17:03:07.279255 4856 status_manager.go:851] "Failed to get status for pod" podUID="5f34c6a8-6023-480c-a25e-46f8c4f3766b" pod="openshift-marketplace/certified-operators-bxhpt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-bxhpt\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:07 crc kubenswrapper[4856]: I0126 17:03:07.279446 4856 status_manager.go:851] "Failed to get status for pod" podUID="8cd36133-7a25-4dae-83a4-bbd0fbf1f2ee" pod="openshift-marketplace/redhat-operators-lfhpz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lfhpz\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:07 crc kubenswrapper[4856]: E0126 17:03:07.599666 4856 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.241:6443: connect: connection refused" interval="3.2s" Jan 26 17:03:07 crc kubenswrapper[4856]: I0126 17:03:07.810841 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-gdp2n" Jan 26 17:03:07 crc kubenswrapper[4856]: I0126 17:03:07.811125 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gdp2n" Jan 26 17:03:07 crc kubenswrapper[4856]: I0126 17:03:07.846286 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gdp2n" Jan 26 17:03:07 crc kubenswrapper[4856]: I0126 17:03:07.846901 4856 status_manager.go:851] "Failed to get status for pod" podUID="5f34c6a8-6023-480c-a25e-46f8c4f3766b" pod="openshift-marketplace/certified-operators-bxhpt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-bxhpt\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:07 crc kubenswrapper[4856]: I0126 17:03:07.847454 4856 status_manager.go:851] "Failed to get status for pod" podUID="8cd36133-7a25-4dae-83a4-bbd0fbf1f2ee" pod="openshift-marketplace/redhat-operators-lfhpz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lfhpz\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:07 crc kubenswrapper[4856]: I0126 17:03:07.847657 4856 status_manager.go:851] "Failed to get status for pod" podUID="886857c0-659b-4904-b75a-c55c3f712747" pod="openshift-marketplace/redhat-marketplace-97mff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-97mff\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:07 crc kubenswrapper[4856]: I0126 17:03:07.847812 4856 status_manager.go:851] "Failed to get status for pod" podUID="69379820-3062-4964-a8dd-8689f8cea38d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.241:6443: connect: connection 
refused" Jan 26 17:03:07 crc kubenswrapper[4856]: I0126 17:03:07.847976 4856 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:07 crc kubenswrapper[4856]: I0126 17:03:07.848155 4856 status_manager.go:851] "Failed to get status for pod" podUID="4327b726-2edc-40ad-ac96-b19a7e020048" pod="openshift-marketplace/community-operators-gdp2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gdp2n\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:08 crc kubenswrapper[4856]: I0126 17:03:08.245676 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lfhpz" event={"ID":"8cd36133-7a25-4dae-83a4-bbd0fbf1f2ee","Type":"ContainerStarted","Data":"13caf7588d65057d336550c3ff29c21a74a680754d74d9aec3ba9f9b3471b8a6"} Jan 26 17:03:08 crc kubenswrapper[4856]: I0126 17:03:08.246057 4856 status_manager.go:851] "Failed to get status for pod" podUID="69379820-3062-4964-a8dd-8689f8cea38d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:08 crc kubenswrapper[4856]: I0126 17:03:08.246328 4856 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:08 crc kubenswrapper[4856]: I0126 
17:03:08.246837 4856 status_manager.go:851] "Failed to get status for pod" podUID="4327b726-2edc-40ad-ac96-b19a7e020048" pod="openshift-marketplace/community-operators-gdp2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gdp2n\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:08 crc kubenswrapper[4856]: I0126 17:03:08.247265 4856 status_manager.go:851] "Failed to get status for pod" podUID="5f34c6a8-6023-480c-a25e-46f8c4f3766b" pod="openshift-marketplace/certified-operators-bxhpt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-bxhpt\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:08 crc kubenswrapper[4856]: I0126 17:03:08.247455 4856 status_manager.go:851] "Failed to get status for pod" podUID="8cd36133-7a25-4dae-83a4-bbd0fbf1f2ee" pod="openshift-marketplace/redhat-operators-lfhpz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lfhpz\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:08 crc kubenswrapper[4856]: I0126 17:03:08.247695 4856 status_manager.go:851] "Failed to get status for pod" podUID="886857c0-659b-4904-b75a-c55c3f712747" pod="openshift-marketplace/redhat-marketplace-97mff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-97mff\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:08 crc kubenswrapper[4856]: E0126 17:03:08.275722 4856 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.241:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-marketplace-97mff.188e56a765f7220b openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-97mff,UID:886857c0-659b-4904-b75a-c55c3f712747,APIVersion:v1,ResourceVersion:29932,FieldPath:spec.containers{registry-server},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\" in 441ms (442ms including waiting). Image size: 907837715 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 17:03:00.573405707 +0000 UTC m=+276.526659678,LastTimestamp:2026-01-26 17:03:00.573405707 +0000 UTC m=+276.526659678,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 17:03:08 crc kubenswrapper[4856]: I0126 17:03:08.282454 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gdp2n" Jan 26 17:03:08 crc kubenswrapper[4856]: I0126 17:03:08.283069 4856 status_manager.go:851] "Failed to get status for pod" podUID="886857c0-659b-4904-b75a-c55c3f712747" pod="openshift-marketplace/redhat-marketplace-97mff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-97mff\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:08 crc kubenswrapper[4856]: I0126 17:03:08.283536 4856 status_manager.go:851] "Failed to get status for pod" podUID="69379820-3062-4964-a8dd-8689f8cea38d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:08 crc kubenswrapper[4856]: I0126 17:03:08.283808 4856 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:08 crc kubenswrapper[4856]: I0126 17:03:08.284031 4856 status_manager.go:851] "Failed to get status for pod" podUID="4327b726-2edc-40ad-ac96-b19a7e020048" pod="openshift-marketplace/community-operators-gdp2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gdp2n\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:08 crc kubenswrapper[4856]: I0126 17:03:08.284273 4856 status_manager.go:851] "Failed to get status for pod" podUID="5f34c6a8-6023-480c-a25e-46f8c4f3766b" pod="openshift-marketplace/certified-operators-bxhpt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-bxhpt\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:08 crc kubenswrapper[4856]: I0126 17:03:08.284506 4856 status_manager.go:851] "Failed to get status for pod" podUID="8cd36133-7a25-4dae-83a4-bbd0fbf1f2ee" pod="openshift-marketplace/redhat-operators-lfhpz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lfhpz\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:09 crc kubenswrapper[4856]: I0126 17:03:09.672118 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" podUID="69008ed1-f3e5-400d-852f-adbcd94199f6" containerName="oauth-openshift" containerID="cri-o://749ef964d6b168f431c27d0286b92e40d64a8b4fb99f430b33432827ee871fc9" gracePeriod=15 Jan 26 17:03:10 crc kubenswrapper[4856]: E0126 17:03:10.800384 4856 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.241:6443: connect: connection refused" interval="6.4s" Jan 26 17:03:11 crc kubenswrapper[4856]: I0126 17:03:11.263669 4856 generic.go:334] "Generic (PLEG): container finished" podID="69008ed1-f3e5-400d-852f-adbcd94199f6" containerID="749ef964d6b168f431c27d0286b92e40d64a8b4fb99f430b33432827ee871fc9" exitCode=0 Jan 26 17:03:11 crc kubenswrapper[4856]: I0126 17:03:11.263714 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" event={"ID":"69008ed1-f3e5-400d-852f-adbcd94199f6","Type":"ContainerDied","Data":"749ef964d6b168f431c27d0286b92e40d64a8b4fb99f430b33432827ee871fc9"} Jan 26 17:03:11 crc kubenswrapper[4856]: I0126 17:03:11.971924 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" Jan 26 17:03:11 crc kubenswrapper[4856]: I0126 17:03:11.972601 4856 status_manager.go:851] "Failed to get status for pod" podUID="5f34c6a8-6023-480c-a25e-46f8c4f3766b" pod="openshift-marketplace/certified-operators-bxhpt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-bxhpt\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:11 crc kubenswrapper[4856]: I0126 17:03:11.972908 4856 status_manager.go:851] "Failed to get status for pod" podUID="8cd36133-7a25-4dae-83a4-bbd0fbf1f2ee" pod="openshift-marketplace/redhat-operators-lfhpz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lfhpz\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:11 crc kubenswrapper[4856]: I0126 17:03:11.973127 4856 status_manager.go:851] "Failed to get status for pod" podUID="886857c0-659b-4904-b75a-c55c3f712747" pod="openshift-marketplace/redhat-marketplace-97mff" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-97mff\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:11 crc kubenswrapper[4856]: I0126 17:03:11.973382 4856 status_manager.go:851] "Failed to get status for pod" podUID="69379820-3062-4964-a8dd-8689f8cea38d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:11 crc kubenswrapper[4856]: I0126 17:03:11.973560 4856 status_manager.go:851] "Failed to get status for pod" podUID="69008ed1-f3e5-400d-852f-adbcd94199f6" pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-cb8nk\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:11 crc kubenswrapper[4856]: I0126 17:03:11.973742 4856 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:11 crc kubenswrapper[4856]: I0126 17:03:11.974090 4856 status_manager.go:851] "Failed to get status for pod" podUID="4327b726-2edc-40ad-ac96-b19a7e020048" pod="openshift-marketplace/community-operators-gdp2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gdp2n\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.124347 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-system-trusted-ca-bundle\") pod \"69008ed1-f3e5-400d-852f-adbcd94199f6\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.124431 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-system-serving-cert\") pod \"69008ed1-f3e5-400d-852f-adbcd94199f6\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.124463 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/69008ed1-f3e5-400d-852f-adbcd94199f6-audit-dir\") pod \"69008ed1-f3e5-400d-852f-adbcd94199f6\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.124552 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kf2b2\" (UniqueName: \"kubernetes.io/projected/69008ed1-f3e5-400d-852f-adbcd94199f6-kube-api-access-kf2b2\") pod \"69008ed1-f3e5-400d-852f-adbcd94199f6\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.124612 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-system-session\") pod \"69008ed1-f3e5-400d-852f-adbcd94199f6\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.124643 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-user-template-error\") pod 
\"69008ed1-f3e5-400d-852f-adbcd94199f6\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.124682 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/69008ed1-f3e5-400d-852f-adbcd94199f6-audit-policies\") pod \"69008ed1-f3e5-400d-852f-adbcd94199f6\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.124713 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-user-template-login\") pod \"69008ed1-f3e5-400d-852f-adbcd94199f6\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.124748 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-system-router-certs\") pod \"69008ed1-f3e5-400d-852f-adbcd94199f6\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.124773 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-user-idp-0-file-data\") pod \"69008ed1-f3e5-400d-852f-adbcd94199f6\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.124803 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-system-service-ca\") pod \"69008ed1-f3e5-400d-852f-adbcd94199f6\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " Jan 26 
17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.124873 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-system-ocp-branding-template\") pod \"69008ed1-f3e5-400d-852f-adbcd94199f6\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.124910 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-user-template-provider-selection\") pod \"69008ed1-f3e5-400d-852f-adbcd94199f6\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.124942 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-system-cliconfig\") pod \"69008ed1-f3e5-400d-852f-adbcd94199f6\" (UID: \"69008ed1-f3e5-400d-852f-adbcd94199f6\") " Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.126033 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "69008ed1-f3e5-400d-852f-adbcd94199f6" (UID: "69008ed1-f3e5-400d-852f-adbcd94199f6"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.126161 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69008ed1-f3e5-400d-852f-adbcd94199f6-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "69008ed1-f3e5-400d-852f-adbcd94199f6" (UID: "69008ed1-f3e5-400d-852f-adbcd94199f6"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.126986 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "69008ed1-f3e5-400d-852f-adbcd94199f6" (UID: "69008ed1-f3e5-400d-852f-adbcd94199f6"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.130225 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "69008ed1-f3e5-400d-852f-adbcd94199f6" (UID: "69008ed1-f3e5-400d-852f-adbcd94199f6"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.132848 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "69008ed1-f3e5-400d-852f-adbcd94199f6" (UID: "69008ed1-f3e5-400d-852f-adbcd94199f6"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.136278 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "69008ed1-f3e5-400d-852f-adbcd94199f6" (UID: "69008ed1-f3e5-400d-852f-adbcd94199f6"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.136622 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69008ed1-f3e5-400d-852f-adbcd94199f6-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "69008ed1-f3e5-400d-852f-adbcd94199f6" (UID: "69008ed1-f3e5-400d-852f-adbcd94199f6"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.138129 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "69008ed1-f3e5-400d-852f-adbcd94199f6" (UID: "69008ed1-f3e5-400d-852f-adbcd94199f6"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.138520 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "69008ed1-f3e5-400d-852f-adbcd94199f6" (UID: "69008ed1-f3e5-400d-852f-adbcd94199f6"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.138737 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "69008ed1-f3e5-400d-852f-adbcd94199f6" (UID: "69008ed1-f3e5-400d-852f-adbcd94199f6"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.139352 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69008ed1-f3e5-400d-852f-adbcd94199f6-kube-api-access-kf2b2" (OuterVolumeSpecName: "kube-api-access-kf2b2") pod "69008ed1-f3e5-400d-852f-adbcd94199f6" (UID: "69008ed1-f3e5-400d-852f-adbcd94199f6"). InnerVolumeSpecName "kube-api-access-kf2b2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.187025 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "69008ed1-f3e5-400d-852f-adbcd94199f6" (UID: "69008ed1-f3e5-400d-852f-adbcd94199f6"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.187289 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "69008ed1-f3e5-400d-852f-adbcd94199f6" (UID: "69008ed1-f3e5-400d-852f-adbcd94199f6"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.187334 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "69008ed1-f3e5-400d-852f-adbcd94199f6" (UID: "69008ed1-f3e5-400d-852f-adbcd94199f6"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.226909 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.227200 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.227210 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.227220 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.227251 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.227262 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.227273 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.227290 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.227299 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.227313 4856 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/69008ed1-f3e5-400d-852f-adbcd94199f6-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.227323 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kf2b2\" (UniqueName: \"kubernetes.io/projected/69008ed1-f3e5-400d-852f-adbcd94199f6-kube-api-access-kf2b2\") on node \"crc\" DevicePath \"\"" Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.227333 4856 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.227341 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/69008ed1-f3e5-400d-852f-adbcd94199f6-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.227349 4856 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/69008ed1-f3e5-400d-852f-adbcd94199f6-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.271583 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" event={"ID":"69008ed1-f3e5-400d-852f-adbcd94199f6","Type":"ContainerDied","Data":"d2e5352f5a4f0bdf4461c4b926a9353c0b4a673c6263c30adba1a3d7a2d6a8ad"} Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.271656 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.271965 4856 scope.go:117] "RemoveContainer" containerID="749ef964d6b168f431c27d0286b92e40d64a8b4fb99f430b33432827ee871fc9" Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.272604 4856 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.272929 4856 status_manager.go:851] "Failed to get status for pod" podUID="4327b726-2edc-40ad-ac96-b19a7e020048" pod="openshift-marketplace/community-operators-gdp2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gdp2n\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.273244 4856 status_manager.go:851] "Failed to get status for pod" podUID="5f34c6a8-6023-480c-a25e-46f8c4f3766b" pod="openshift-marketplace/certified-operators-bxhpt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-bxhpt\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.273546 4856 status_manager.go:851] "Failed to get status for pod" podUID="8cd36133-7a25-4dae-83a4-bbd0fbf1f2ee" pod="openshift-marketplace/redhat-operators-lfhpz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lfhpz\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.273901 4856 status_manager.go:851] "Failed to get status for pod" 
podUID="886857c0-659b-4904-b75a-c55c3f712747" pod="openshift-marketplace/redhat-marketplace-97mff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-97mff\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.274242 4856 status_manager.go:851] "Failed to get status for pod" podUID="69379820-3062-4964-a8dd-8689f8cea38d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.274608 4856 status_manager.go:851] "Failed to get status for pod" podUID="69008ed1-f3e5-400d-852f-adbcd94199f6" pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-cb8nk\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.295106 4856 status_manager.go:851] "Failed to get status for pod" podUID="69379820-3062-4964-a8dd-8689f8cea38d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.295649 4856 status_manager.go:851] "Failed to get status for pod" podUID="69008ed1-f3e5-400d-852f-adbcd94199f6" pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-cb8nk\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.295955 4856 status_manager.go:851] "Failed to get status for pod" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.296230 4856 status_manager.go:851] "Failed to get status for pod" podUID="4327b726-2edc-40ad-ac96-b19a7e020048" pod="openshift-marketplace/community-operators-gdp2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gdp2n\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.296564 4856 status_manager.go:851] "Failed to get status for pod" podUID="5f34c6a8-6023-480c-a25e-46f8c4f3766b" pod="openshift-marketplace/certified-operators-bxhpt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-bxhpt\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.296821 4856 status_manager.go:851] "Failed to get status for pod" podUID="8cd36133-7a25-4dae-83a4-bbd0fbf1f2ee" pod="openshift-marketplace/redhat-operators-lfhpz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lfhpz\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:12 crc kubenswrapper[4856]: I0126 17:03:12.297058 4856 status_manager.go:851] "Failed to get status for pod" podUID="886857c0-659b-4904-b75a-c55c3f712747" pod="openshift-marketplace/redhat-marketplace-97mff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-97mff\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:14 crc kubenswrapper[4856]: I0126 17:03:14.394946 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:03:14 crc kubenswrapper[4856]: I0126 17:03:14.395869 4856 status_manager.go:851] "Failed to get status for pod" podUID="5f34c6a8-6023-480c-a25e-46f8c4f3766b" pod="openshift-marketplace/certified-operators-bxhpt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-bxhpt\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:14 crc kubenswrapper[4856]: I0126 17:03:14.396309 4856 status_manager.go:851] "Failed to get status for pod" podUID="8cd36133-7a25-4dae-83a4-bbd0fbf1f2ee" pod="openshift-marketplace/redhat-operators-lfhpz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lfhpz\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:14 crc kubenswrapper[4856]: I0126 17:03:14.396902 4856 status_manager.go:851] "Failed to get status for pod" podUID="886857c0-659b-4904-b75a-c55c3f712747" pod="openshift-marketplace/redhat-marketplace-97mff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-97mff\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:14 crc kubenswrapper[4856]: I0126 17:03:14.397211 4856 status_manager.go:851] "Failed to get status for pod" podUID="69379820-3062-4964-a8dd-8689f8cea38d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:14 crc kubenswrapper[4856]: I0126 17:03:14.397669 4856 status_manager.go:851] "Failed to get status for pod" podUID="69008ed1-f3e5-400d-852f-adbcd94199f6" pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-cb8nk\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:14 crc kubenswrapper[4856]: I0126 17:03:14.397969 4856 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:14 crc kubenswrapper[4856]: I0126 17:03:14.398214 4856 status_manager.go:851] "Failed to get status for pod" podUID="4327b726-2edc-40ad-ac96-b19a7e020048" pod="openshift-marketplace/community-operators-gdp2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gdp2n\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:14 crc kubenswrapper[4856]: I0126 17:03:14.410273 4856 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="59ecd87a-c5db-446d-ad3e-cfabbd648c1d" Jan 26 17:03:14 crc kubenswrapper[4856]: I0126 17:03:14.410330 4856 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="59ecd87a-c5db-446d-ad3e-cfabbd648c1d" Jan 26 17:03:14 crc kubenswrapper[4856]: E0126 17:03:14.410881 4856 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:03:14 crc kubenswrapper[4856]: I0126 17:03:14.443518 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:03:15 crc kubenswrapper[4856]: I0126 17:03:15.311083 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 26 17:03:15 crc kubenswrapper[4856]: I0126 17:03:15.311402 4856 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="fb3c5348b8b83991cbb42255dc07d74fe50e200793efe1a7b2b2727a5c2be800" exitCode=1 Jan 26 17:03:15 crc kubenswrapper[4856]: I0126 17:03:15.311498 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"fb3c5348b8b83991cbb42255dc07d74fe50e200793efe1a7b2b2727a5c2be800"} Jan 26 17:03:15 crc kubenswrapper[4856]: I0126 17:03:15.312064 4856 scope.go:117] "RemoveContainer" containerID="fb3c5348b8b83991cbb42255dc07d74fe50e200793efe1a7b2b2727a5c2be800" Jan 26 17:03:15 crc kubenswrapper[4856]: I0126 17:03:15.312482 4856 status_manager.go:851] "Failed to get status for pod" podUID="69379820-3062-4964-a8dd-8689f8cea38d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:15 crc kubenswrapper[4856]: I0126 17:03:15.312741 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lfhpz" Jan 26 17:03:15 crc kubenswrapper[4856]: I0126 17:03:15.312799 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lfhpz" Jan 26 17:03:15 crc kubenswrapper[4856]: I0126 17:03:15.312973 4856 status_manager.go:851] "Failed to get status for pod" 
podUID="69008ed1-f3e5-400d-852f-adbcd94199f6" pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-cb8nk\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:15 crc kubenswrapper[4856]: I0126 17:03:15.313848 4856 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:15 crc kubenswrapper[4856]: I0126 17:03:15.323954 4856 status_manager.go:851] "Failed to get status for pod" podUID="4327b726-2edc-40ad-ac96-b19a7e020048" pod="openshift-marketplace/community-operators-gdp2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gdp2n\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:15 crc kubenswrapper[4856]: I0126 17:03:15.324997 4856 status_manager.go:851] "Failed to get status for pod" podUID="5f34c6a8-6023-480c-a25e-46f8c4f3766b" pod="openshift-marketplace/certified-operators-bxhpt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-bxhpt\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:15 crc kubenswrapper[4856]: I0126 17:03:15.325590 4856 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="0f264a73611fd2a7e43252672774079b6705528a6b5a493040487d1b27e3dc7e" exitCode=0 Jan 26 17:03:15 crc kubenswrapper[4856]: I0126 17:03:15.325639 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"0f264a73611fd2a7e43252672774079b6705528a6b5a493040487d1b27e3dc7e"} Jan 26 17:03:15 crc kubenswrapper[4856]: I0126 17:03:15.325711 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b454fc4809f34d7d4bf4b768d40eaf90ad0aebd9fa27c08031ff2c9cfd3e6b1e"} Jan 26 17:03:15 crc kubenswrapper[4856]: I0126 17:03:15.325774 4856 status_manager.go:851] "Failed to get status for pod" podUID="8cd36133-7a25-4dae-83a4-bbd0fbf1f2ee" pod="openshift-marketplace/redhat-operators-lfhpz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lfhpz\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:15 crc kubenswrapper[4856]: I0126 17:03:15.326115 4856 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="59ecd87a-c5db-446d-ad3e-cfabbd648c1d" Jan 26 17:03:15 crc kubenswrapper[4856]: I0126 17:03:15.326134 4856 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="59ecd87a-c5db-446d-ad3e-cfabbd648c1d" Jan 26 17:03:15 crc kubenswrapper[4856]: E0126 17:03:15.326483 4856 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:03:15 crc kubenswrapper[4856]: I0126 17:03:15.326769 4856 status_manager.go:851] "Failed to get status for pod" podUID="886857c0-659b-4904-b75a-c55c3f712747" pod="openshift-marketplace/redhat-marketplace-97mff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-97mff\": dial tcp 38.102.83.241:6443: connect: connection 
refused" Jan 26 17:03:15 crc kubenswrapper[4856]: I0126 17:03:15.327662 4856 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:15 crc kubenswrapper[4856]: I0126 17:03:15.329727 4856 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:15 crc kubenswrapper[4856]: I0126 17:03:15.330210 4856 status_manager.go:851] "Failed to get status for pod" podUID="4327b726-2edc-40ad-ac96-b19a7e020048" pod="openshift-marketplace/community-operators-gdp2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gdp2n\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:15 crc kubenswrapper[4856]: I0126 17:03:15.331980 4856 status_manager.go:851] "Failed to get status for pod" podUID="5f34c6a8-6023-480c-a25e-46f8c4f3766b" pod="openshift-marketplace/certified-operators-bxhpt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-bxhpt\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:15 crc kubenswrapper[4856]: I0126 17:03:15.332508 4856 status_manager.go:851] "Failed to get status for pod" podUID="8cd36133-7a25-4dae-83a4-bbd0fbf1f2ee" pod="openshift-marketplace/redhat-operators-lfhpz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lfhpz\": dial tcp 
38.102.83.241:6443: connect: connection refused" Jan 26 17:03:15 crc kubenswrapper[4856]: I0126 17:03:15.333323 4856 status_manager.go:851] "Failed to get status for pod" podUID="886857c0-659b-4904-b75a-c55c3f712747" pod="openshift-marketplace/redhat-marketplace-97mff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-97mff\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:15 crc kubenswrapper[4856]: I0126 17:03:15.334174 4856 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:15 crc kubenswrapper[4856]: I0126 17:03:15.334621 4856 status_manager.go:851] "Failed to get status for pod" podUID="69379820-3062-4964-a8dd-8689f8cea38d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:15 crc kubenswrapper[4856]: I0126 17:03:15.335064 4856 status_manager.go:851] "Failed to get status for pod" podUID="69008ed1-f3e5-400d-852f-adbcd94199f6" pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-cb8nk\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:15 crc kubenswrapper[4856]: I0126 17:03:15.360894 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lfhpz" Jan 26 17:03:15 crc kubenswrapper[4856]: I0126 17:03:15.361432 4856 status_manager.go:851] "Failed to get status for pod" 
podUID="69008ed1-f3e5-400d-852f-adbcd94199f6" pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-cb8nk\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:15 crc kubenswrapper[4856]: I0126 17:03:15.361816 4856 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:15 crc kubenswrapper[4856]: I0126 17:03:15.362996 4856 status_manager.go:851] "Failed to get status for pod" podUID="4327b726-2edc-40ad-ac96-b19a7e020048" pod="openshift-marketplace/community-operators-gdp2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gdp2n\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:15 crc kubenswrapper[4856]: I0126 17:03:15.363164 4856 status_manager.go:851] "Failed to get status for pod" podUID="5f34c6a8-6023-480c-a25e-46f8c4f3766b" pod="openshift-marketplace/certified-operators-bxhpt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-bxhpt\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:15 crc kubenswrapper[4856]: I0126 17:03:15.363308 4856 status_manager.go:851] "Failed to get status for pod" podUID="8cd36133-7a25-4dae-83a4-bbd0fbf1f2ee" pod="openshift-marketplace/redhat-operators-lfhpz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lfhpz\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:15 crc kubenswrapper[4856]: I0126 17:03:15.363453 4856 status_manager.go:851] "Failed 
to get status for pod" podUID="886857c0-659b-4904-b75a-c55c3f712747" pod="openshift-marketplace/redhat-marketplace-97mff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-97mff\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:15 crc kubenswrapper[4856]: I0126 17:03:15.363957 4856 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:15 crc kubenswrapper[4856]: I0126 17:03:15.365171 4856 status_manager.go:851] "Failed to get status for pod" podUID="69379820-3062-4964-a8dd-8689f8cea38d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:15 crc kubenswrapper[4856]: I0126 17:03:15.442912 4856 status_manager.go:851] "Failed to get status for pod" podUID="69008ed1-f3e5-400d-852f-adbcd94199f6" pod="openshift-authentication/oauth-openshift-558db77b4-cb8nk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-cb8nk\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:15 crc kubenswrapper[4856]: I0126 17:03:15.443463 4856 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:15 crc kubenswrapper[4856]: I0126 17:03:15.443639 
4856 status_manager.go:851] "Failed to get status for pod" podUID="4327b726-2edc-40ad-ac96-b19a7e020048" pod="openshift-marketplace/community-operators-gdp2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gdp2n\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:15 crc kubenswrapper[4856]: I0126 17:03:15.443793 4856 status_manager.go:851] "Failed to get status for pod" podUID="5f34c6a8-6023-480c-a25e-46f8c4f3766b" pod="openshift-marketplace/certified-operators-bxhpt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-bxhpt\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:15 crc kubenswrapper[4856]: I0126 17:03:15.443987 4856 status_manager.go:851] "Failed to get status for pod" podUID="8cd36133-7a25-4dae-83a4-bbd0fbf1f2ee" pod="openshift-marketplace/redhat-operators-lfhpz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lfhpz\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:15 crc kubenswrapper[4856]: I0126 17:03:15.444163 4856 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:15 crc kubenswrapper[4856]: I0126 17:03:15.444414 4856 status_manager.go:851] "Failed to get status for pod" podUID="886857c0-659b-4904-b75a-c55c3f712747" pod="openshift-marketplace/redhat-marketplace-97mff" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-97mff\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:15 crc kubenswrapper[4856]: I0126 17:03:15.444637 4856 status_manager.go:851] 
"Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:15 crc kubenswrapper[4856]: I0126 17:03:15.444797 4856 status_manager.go:851] "Failed to get status for pod" podUID="69379820-3062-4964-a8dd-8689f8cea38d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:15 crc kubenswrapper[4856]: E0126 17:03:15.623611 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:03:15Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:03:15Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:03:15Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T17:03:15Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 26 17:03:15 crc 
kubenswrapper[4856]: E0126 17:03:15.623912 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.241:6443: connect: connection refused"
Jan 26 17:03:15 crc kubenswrapper[4856]: E0126 17:03:15.624177 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.241:6443: connect: connection refused"
Jan 26 17:03:15 crc kubenswrapper[4856]: E0126 17:03:15.624398 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.241:6443: connect: connection refused"
Jan 26 17:03:15 crc kubenswrapper[4856]: E0126 17:03:15.624785 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.241:6443: connect: connection refused"
Jan 26 17:03:15 crc kubenswrapper[4856]: E0126 17:03:15.624816 4856 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Jan 26 17:03:16 crc kubenswrapper[4856]: I0126 17:03:16.335203 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 26 17:03:16 crc kubenswrapper[4856]: I0126 17:03:16.335331 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"80273e46ebcca0298cf3f63e0e0aabceb330a19fc9f5399a09ac60d75bf71e10"}
Jan 26 17:03:16 crc kubenswrapper[4856]: I0126 17:03:16.337621 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b2d6eca8f4d2929c26b180b5010215bd6e8b2a977125d6a5f2c070fabb3ddee8"}
Jan 26 17:03:16 crc kubenswrapper[4856]: I0126 17:03:16.337670 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"498cf9d3ffe151aa5744d939a0d77cf3881afe48bae0b6e4620ce96a1cd0014c"}
Jan 26 17:03:16 crc kubenswrapper[4856]: I0126 17:03:16.391809 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lfhpz"
Jan 26 17:03:17 crc kubenswrapper[4856]: I0126 17:03:17.345606 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8cdb077ef075ef70025deaf712b29a0b97f686ddd45a68c8741b2541bb6a5fad"}
Jan 26 17:03:19 crc kubenswrapper[4856]: I0126 17:03:19.282392 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" podUID="cfa40861-cc08-4145-a185-6a3fb07eaabe" containerName="registry" containerID="cri-o://fc8e05e1e87fe66232302aff71c23d6b6c36b366751f113f41815a46bc948eb9" gracePeriod=30
Jan 26 17:03:19 crc kubenswrapper[4856]: I0126 17:03:19.361418 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"aa8e49724230f0e647de2fe59a2cd9370bd08515b0927af18b69b392f4dfd64b"}
Jan 26 17:03:19 crc kubenswrapper[4856]: I0126 17:03:19.423491 4856 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-wxbdh container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.30:5000/healthz\": dial tcp 10.217.0.30:5000: connect: connection refused" start-of-body=
Jan 26 17:03:19 crc kubenswrapper[4856]: I0126 17:03:19.423608 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" podUID="cfa40861-cc08-4145-a185-6a3fb07eaabe" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.30:5000/healthz\": dial tcp 10.217.0.30:5000: connect: connection refused"
Jan 26 17:03:20 crc kubenswrapper[4856]: I0126 17:03:20.368868 4856 generic.go:334] "Generic (PLEG): container finished" podID="cfa40861-cc08-4145-a185-6a3fb07eaabe" containerID="fc8e05e1e87fe66232302aff71c23d6b6c36b366751f113f41815a46bc948eb9" exitCode=0
Jan 26 17:03:20 crc kubenswrapper[4856]: I0126 17:03:20.369003 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" event={"ID":"cfa40861-cc08-4145-a185-6a3fb07eaabe","Type":"ContainerDied","Data":"fc8e05e1e87fe66232302aff71c23d6b6c36b366751f113f41815a46bc948eb9"}
Jan 26 17:03:20 crc kubenswrapper[4856]: I0126 17:03:20.373114 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0c3a9c85a068db8311cb2e29673d265412a4ac44329f3e596c834ede0310716f"}
Jan 26 17:03:20 crc kubenswrapper[4856]: I0126 17:03:20.373337 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 17:03:20 crc kubenswrapper[4856]: I0126 17:03:20.373452 4856 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="59ecd87a-c5db-446d-ad3e-cfabbd648c1d"
Jan 26 17:03:20 crc kubenswrapper[4856]: I0126 17:03:20.373475 4856 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="59ecd87a-c5db-446d-ad3e-cfabbd648c1d"
Jan 26 17:03:20 crc kubenswrapper[4856]: I0126 17:03:20.382386 4856 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 17:03:20 crc kubenswrapper[4856]: I0126 17:03:20.583904 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 17:03:20 crc kubenswrapper[4856]: I0126 17:03:20.891575 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh"
Jan 26 17:03:20 crc kubenswrapper[4856]: I0126 17:03:20.946826 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cfa40861-cc08-4145-a185-6a3fb07eaabe-bound-sa-token\") pod \"cfa40861-cc08-4145-a185-6a3fb07eaabe\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") "
Jan 26 17:03:20 crc kubenswrapper[4856]: I0126 17:03:20.946875 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cfa40861-cc08-4145-a185-6a3fb07eaabe-trusted-ca\") pod \"cfa40861-cc08-4145-a185-6a3fb07eaabe\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") "
Jan 26 17:03:20 crc kubenswrapper[4856]: I0126 17:03:20.946897 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cfa40861-cc08-4145-a185-6a3fb07eaabe-ca-trust-extracted\") pod \"cfa40861-cc08-4145-a185-6a3fb07eaabe\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") "
Jan 26 17:03:20 crc kubenswrapper[4856]: I0126 17:03:20.946939 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tf448\" (UniqueName: \"kubernetes.io/projected/cfa40861-cc08-4145-a185-6a3fb07eaabe-kube-api-access-tf448\") pod \"cfa40861-cc08-4145-a185-6a3fb07eaabe\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") "
Jan 26 17:03:20 crc kubenswrapper[4856]: I0126 17:03:20.947145 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"cfa40861-cc08-4145-a185-6a3fb07eaabe\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") "
Jan 26 17:03:20 crc kubenswrapper[4856]: I0126 17:03:20.947215 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cfa40861-cc08-4145-a185-6a3fb07eaabe-installation-pull-secrets\") pod \"cfa40861-cc08-4145-a185-6a3fb07eaabe\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") "
Jan 26 17:03:20 crc kubenswrapper[4856]: I0126 17:03:20.947236 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cfa40861-cc08-4145-a185-6a3fb07eaabe-registry-tls\") pod \"cfa40861-cc08-4145-a185-6a3fb07eaabe\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") "
Jan 26 17:03:20 crc kubenswrapper[4856]: I0126 17:03:20.947254 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cfa40861-cc08-4145-a185-6a3fb07eaabe-registry-certificates\") pod \"cfa40861-cc08-4145-a185-6a3fb07eaabe\" (UID: \"cfa40861-cc08-4145-a185-6a3fb07eaabe\") "
Jan 26 17:03:20 crc kubenswrapper[4856]: I0126 17:03:20.947783 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfa40861-cc08-4145-a185-6a3fb07eaabe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "cfa40861-cc08-4145-a185-6a3fb07eaabe" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 17:03:20 crc kubenswrapper[4856]: I0126 17:03:20.948162 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfa40861-cc08-4145-a185-6a3fb07eaabe-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "cfa40861-cc08-4145-a185-6a3fb07eaabe" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 17:03:20 crc kubenswrapper[4856]: I0126 17:03:20.952851 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfa40861-cc08-4145-a185-6a3fb07eaabe-kube-api-access-tf448" (OuterVolumeSpecName: "kube-api-access-tf448") pod "cfa40861-cc08-4145-a185-6a3fb07eaabe" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe"). InnerVolumeSpecName "kube-api-access-tf448". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 17:03:20 crc kubenswrapper[4856]: I0126 17:03:20.953080 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfa40861-cc08-4145-a185-6a3fb07eaabe-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "cfa40861-cc08-4145-a185-6a3fb07eaabe" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 17:03:20 crc kubenswrapper[4856]: I0126 17:03:20.953299 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfa40861-cc08-4145-a185-6a3fb07eaabe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "cfa40861-cc08-4145-a185-6a3fb07eaabe" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 17:03:20 crc kubenswrapper[4856]: I0126 17:03:20.953445 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfa40861-cc08-4145-a185-6a3fb07eaabe-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "cfa40861-cc08-4145-a185-6a3fb07eaabe" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 17:03:20 crc kubenswrapper[4856]: I0126 17:03:20.956725 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "cfa40861-cc08-4145-a185-6a3fb07eaabe" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 26 17:03:20 crc kubenswrapper[4856]: I0126 17:03:20.982714 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfa40861-cc08-4145-a185-6a3fb07eaabe-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "cfa40861-cc08-4145-a185-6a3fb07eaabe" (UID: "cfa40861-cc08-4145-a185-6a3fb07eaabe"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 17:03:21 crc kubenswrapper[4856]: I0126 17:03:21.048838 4856 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cfa40861-cc08-4145-a185-6a3fb07eaabe-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 26 17:03:21 crc kubenswrapper[4856]: I0126 17:03:21.048885 4856 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cfa40861-cc08-4145-a185-6a3fb07eaabe-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 26 17:03:21 crc kubenswrapper[4856]: I0126 17:03:21.048900 4856 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cfa40861-cc08-4145-a185-6a3fb07eaabe-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 26 17:03:21 crc kubenswrapper[4856]: I0126 17:03:21.048911 4856 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cfa40861-cc08-4145-a185-6a3fb07eaabe-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 26 17:03:21 crc kubenswrapper[4856]: I0126 17:03:21.048923 4856 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cfa40861-cc08-4145-a185-6a3fb07eaabe-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 26 17:03:21 crc kubenswrapper[4856]: I0126 17:03:21.048935 4856 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cfa40861-cc08-4145-a185-6a3fb07eaabe-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Jan 26 17:03:21 crc kubenswrapper[4856]: I0126 17:03:21.048946 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tf448\" (UniqueName: \"kubernetes.io/projected/cfa40861-cc08-4145-a185-6a3fb07eaabe-kube-api-access-tf448\") on node \"crc\" DevicePath \"\""
Jan 26 17:03:21 crc kubenswrapper[4856]: I0126 17:03:21.380048 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh" event={"ID":"cfa40861-cc08-4145-a185-6a3fb07eaabe","Type":"ContainerDied","Data":"ae7df2de181ac684cadd8c52c3b8878c72703f16549d24e92a2fc45b186ce717"}
Jan 26 17:03:21 crc kubenswrapper[4856]: I0126 17:03:21.380062 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wxbdh"
Jan 26 17:03:21 crc kubenswrapper[4856]: I0126 17:03:21.380107 4856 scope.go:117] "RemoveContainer" containerID="fc8e05e1e87fe66232302aff71c23d6b6c36b366751f113f41815a46bc948eb9"
Jan 26 17:03:21 crc kubenswrapper[4856]: I0126 17:03:21.380352 4856 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="59ecd87a-c5db-446d-ad3e-cfabbd648c1d"
Jan 26 17:03:21 crc kubenswrapper[4856]: I0126 17:03:21.380369 4856 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="59ecd87a-c5db-446d-ad3e-cfabbd648c1d"
Jan 26 17:03:22 crc kubenswrapper[4856]: I0126 17:03:22.879606 4856 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="1c355228-5a24-45a2-9876-b4f0732a65d0"
Jan 26 17:03:23 crc kubenswrapper[4856]: I0126 17:03:23.508739 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 17:03:23 crc kubenswrapper[4856]: I0126 17:03:23.515290 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 17:03:24 crc kubenswrapper[4856]: I0126 17:03:24.939304 4856 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Jan 26 17:03:30 crc kubenswrapper[4856]: I0126 17:03:30.589298 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 17:03:32 crc kubenswrapper[4856]: I0126 17:03:32.758903 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 26 17:03:32 crc kubenswrapper[4856]: I0126 17:03:32.844037 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 26 17:03:33 crc kubenswrapper[4856]: I0126 17:03:33.402891 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 26 17:03:33 crc kubenswrapper[4856]: I0126 17:03:33.854541 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 26 17:03:33 crc kubenswrapper[4856]: I0126 17:03:33.901157 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 26 17:03:33 crc kubenswrapper[4856]: I0126 17:03:33.995367 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 26 17:03:34 crc kubenswrapper[4856]: I0126 17:03:34.278793 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 26 17:03:34 crc kubenswrapper[4856]: I0126 17:03:34.530747 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 26 17:03:34 crc kubenswrapper[4856]: I0126 17:03:34.539666 4856 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","pod69379820-3062-4964-a8dd-8689f8cea38d"] err="unable to destroy cgroup paths for cgroup [kubepods pod69379820-3062-4964-a8dd-8689f8cea38d] : Timed out while waiting for systemd to remove kubepods-pod69379820_3062_4964_a8dd_8689f8cea38d.slice"
Jan 26 17:03:34 crc kubenswrapper[4856]: E0126 17:03:34.539998 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods pod69379820-3062-4964-a8dd-8689f8cea38d] : unable to destroy cgroup paths for cgroup [kubepods pod69379820-3062-4964-a8dd-8689f8cea38d] : Timed out while waiting for systemd to remove kubepods-pod69379820_3062_4964_a8dd_8689f8cea38d.slice" pod="openshift-kube-apiserver/installer-9-crc" podUID="69379820-3062-4964-a8dd-8689f8cea38d"
Jan 26 17:03:34 crc kubenswrapper[4856]: I0126 17:03:34.647628 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 26 17:03:34 crc kubenswrapper[4856]: I0126 17:03:34.805366 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 26 17:03:34 crc kubenswrapper[4856]: I0126 17:03:34.809673 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 26 17:03:34 crc kubenswrapper[4856]: I0126 17:03:34.854428 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 26 17:03:34 crc kubenswrapper[4856]: I0126 17:03:34.894005 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 26 17:03:35 crc kubenswrapper[4856]: I0126 17:03:35.004192 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 26 17:03:35 crc kubenswrapper[4856]: I0126 17:03:35.035288 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 26 17:03:35 crc kubenswrapper[4856]: I0126 17:03:35.202142 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 26 17:03:35 crc kubenswrapper[4856]: I0126 17:03:35.328365 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 26 17:03:35 crc kubenswrapper[4856]: I0126 17:03:35.526370 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 26 17:03:35 crc kubenswrapper[4856]: I0126 17:03:35.593299 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 26 17:03:35 crc kubenswrapper[4856]: I0126 17:03:35.637103 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 26 17:03:35 crc kubenswrapper[4856]: I0126 17:03:35.766985 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 26 17:03:35 crc kubenswrapper[4856]: I0126 17:03:35.997628 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 26 17:03:36 crc kubenswrapper[4856]: I0126 17:03:36.027705 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 26 17:03:36 crc kubenswrapper[4856]: I0126 17:03:36.031989 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 26 17:03:36 crc kubenswrapper[4856]: I0126 17:03:36.129043 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 26 17:03:36 crc kubenswrapper[4856]: I0126 17:03:36.170563 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 26 17:03:36 crc kubenswrapper[4856]: I0126 17:03:36.171976 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 26 17:03:36 crc kubenswrapper[4856]: I0126 17:03:36.259288 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 26 17:03:36 crc kubenswrapper[4856]: I0126 17:03:36.527088 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 26 17:03:36 crc kubenswrapper[4856]: I0126 17:03:36.545665 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 26 17:03:36 crc kubenswrapper[4856]: I0126 17:03:36.550380 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 26 17:03:36 crc kubenswrapper[4856]: I0126 17:03:36.630298 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 26 17:03:36 crc kubenswrapper[4856]: I0126 17:03:36.647645 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 26 17:03:36 crc kubenswrapper[4856]: I0126 17:03:36.654972 4856 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 26 17:03:36 crc kubenswrapper[4856]: I0126 17:03:36.810107 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 26 17:03:36 crc kubenswrapper[4856]: I0126 17:03:36.836259 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 26 17:03:36 crc kubenswrapper[4856]: I0126 17:03:36.866455 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 26 17:03:36 crc kubenswrapper[4856]: I0126 17:03:36.967431 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 26 17:03:37 crc kubenswrapper[4856]: I0126 17:03:37.044551 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 26 17:03:37 crc kubenswrapper[4856]: I0126 17:03:37.143128 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 26 17:03:37 crc kubenswrapper[4856]: I0126 17:03:37.172434 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 26 17:03:37 crc kubenswrapper[4856]: I0126 17:03:37.192564 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 26 17:03:37 crc kubenswrapper[4856]: I0126 17:03:37.200011 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 26 17:03:37 crc kubenswrapper[4856]: I0126 17:03:37.254171 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 26 17:03:37 crc kubenswrapper[4856]: I0126 17:03:37.459294 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 26 17:03:37 crc kubenswrapper[4856]: I0126 17:03:37.523615 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 26 17:03:37 crc kubenswrapper[4856]: I0126 17:03:37.548772 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 26 17:03:37 crc kubenswrapper[4856]: I0126 17:03:37.563986 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 26 17:03:37 crc kubenswrapper[4856]: I0126 17:03:37.567956 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 26 17:03:37 crc kubenswrapper[4856]: I0126 17:03:37.570732 4856 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 26 17:03:37 crc kubenswrapper[4856]: I0126 17:03:37.591192 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 26 17:03:37 crc kubenswrapper[4856]: I0126 17:03:37.675782 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 26 17:03:37 crc kubenswrapper[4856]: I0126 17:03:37.692820 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 26 17:03:37 crc kubenswrapper[4856]: I0126 17:03:37.709070 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 26 17:03:37 crc kubenswrapper[4856]: I0126 17:03:37.712369 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 26 17:03:37 crc kubenswrapper[4856]: I0126 17:03:37.764482 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 26 17:03:37 crc kubenswrapper[4856]: I0126 17:03:37.816619 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 26 17:03:37 crc kubenswrapper[4856]: I0126 17:03:37.846995 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 26 17:03:37 crc kubenswrapper[4856]: I0126 17:03:37.908759 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 26 17:03:37 crc kubenswrapper[4856]: I0126 17:03:37.945172 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 26 17:03:38 crc kubenswrapper[4856]: I0126 17:03:38.026161 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 26 17:03:38 crc kubenswrapper[4856]: I0126 17:03:38.055222 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 26 17:03:38 crc kubenswrapper[4856]: I0126 17:03:38.066228 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 26 17:03:38 crc kubenswrapper[4856]: I0126 17:03:38.074446 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 26 17:03:38 crc kubenswrapper[4856]: I0126 17:03:38.254241 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 26 17:03:38 crc kubenswrapper[4856]: I0126 17:03:38.270790 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 26 17:03:38 crc kubenswrapper[4856]: I0126 17:03:38.346569 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 26 17:03:38 crc kubenswrapper[4856]: I0126 17:03:38.360031 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 26 17:03:38 crc kubenswrapper[4856]: I0126 17:03:38.434095 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 26 17:03:38 crc kubenswrapper[4856]: I0126 17:03:38.466431 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 26 17:03:38 crc kubenswrapper[4856]: I0126 17:03:38.468738 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 26 17:03:38 crc kubenswrapper[4856]: I0126 17:03:38.508989 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 26 17:03:38 crc kubenswrapper[4856]: I0126 17:03:38.539174 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 26 17:03:38 crc kubenswrapper[4856]: I0126 17:03:38.544340 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 26 17:03:38 crc kubenswrapper[4856]: I0126 17:03:38.617969 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 26 17:03:38 crc kubenswrapper[4856]: I0126 17:03:38.630632 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 26 17:03:38 crc kubenswrapper[4856]: I0126 17:03:38.831607 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 26 17:03:38 crc kubenswrapper[4856]: I0126 17:03:38.832326 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 26 17:03:38 crc kubenswrapper[4856]: I0126 17:03:38.965764 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 26 17:03:38 crc kubenswrapper[4856]: I0126 17:03:38.987698 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 26 17:03:38 crc kubenswrapper[4856]: I0126 17:03:38.988432 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.087404 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.130282 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.171662 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.175385 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.389581 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.474960 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.538428 4856 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.538802 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-97mff" podStartSLOduration=41.079029315 podStartE2EDuration="43.538785079s" podCreationTimestamp="2026-01-26 17:02:56 +0000 UTC" firstStartedPulling="2026-01-26 17:02:58.113635403 +0000 UTC m=+274.066889384" lastFinishedPulling="2026-01-26 17:03:00.573391167 +0000 UTC m=+276.526645148" observedRunningTime="2026-01-26 17:03:22.893187657 +0000 UTC m=+298.846441648" watchObservedRunningTime="2026-01-26 17:03:39.538785079 +0000 UTC m=+315.492039060"
Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.539677 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lfhpz" podStartSLOduration=33.877414329 podStartE2EDuration="45.539673774s" podCreationTimestamp="2026-01-26 17:02:54 +0000 UTC" firstStartedPulling="2026-01-26 17:02:55.972045452 +0000 UTC m=+271.925299433" lastFinishedPulling="2026-01-26 17:03:07.634304897 +0000 UTC m=+283.587558878" observedRunningTime="2026-01-26 17:03:22.876729534 +0000 UTC m=+298.829983525" watchObservedRunningTime="2026-01-26 17:03:39.539673774 +0000 UTC m=+315.492927755"
Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.540788 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=39.540784266 podStartE2EDuration="39.540784266s" podCreationTimestamp="2026-01-26 17:03:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:03:22.960055926 +0000 UTC m=+298.913309907" watchObservedRunningTime="2026-01-26 17:03:39.540784266 +0000 UTC m=+315.494038247"
Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.540997 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gdp2n" podStartSLOduration=40.073558069 podStartE2EDuration="42.540992512s" podCreationTimestamp="2026-01-26 17:02:57 +0000 UTC" firstStartedPulling="2026-01-26 17:02:59.121501492 +0000 UTC m=+275.074755473" lastFinishedPulling="2026-01-26 17:03:01.588935945 +0000 UTC m=+277.542189916" observedRunningTime="2026-01-26 17:03:22.843797389 +0000 UTC m=+298.797051400" watchObservedRunningTime="2026-01-26 17:03:39.540992512 +0000 UTC m=+315.494246493"
Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.542299 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-cb8nk","openshift-image-registry/image-registry-697d97f7c8-wxbdh","openshift-kube-apiserver/kube-apiserver-crc"]
Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.542348 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-5d4455474f-tnwpf"]
Jan 26 17:03:39 crc kubenswrapper[4856]: E0126 17:03:39.542563 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69379820-3062-4964-a8dd-8689f8cea38d" containerName="installer"
Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.542581 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="69379820-3062-4964-a8dd-8689f8cea38d" containerName="installer"
Jan 26 17:03:39 crc kubenswrapper[4856]: E0126 17:03:39.542592 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69008ed1-f3e5-400d-852f-adbcd94199f6" containerName="oauth-openshift"
Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.542598 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="69008ed1-f3e5-400d-852f-adbcd94199f6" containerName="oauth-openshift"
Jan 26 17:03:39 crc kubenswrapper[4856]: E0126 17:03:39.542608 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfa40861-cc08-4145-a185-6a3fb07eaabe" containerName="registry"
Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.542614 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfa40861-cc08-4145-a185-6a3fb07eaabe" containerName="registry"
Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.542723 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="69379820-3062-4964-a8dd-8689f8cea38d" containerName="installer"
Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.542733 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="69008ed1-f3e5-400d-852f-adbcd94199f6" containerName="oauth-openshift"
Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.542746 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfa40861-cc08-4145-a185-6a3fb07eaabe" containerName="registry"
Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.543108 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5d4455474f-tnwpf"
Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.546574 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.547069 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.547074 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.547619 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.547873 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.547876 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.548220 4856 reflector.go:368] Caches populated for *v1.Secret from
object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.548408 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.549846 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.549863 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.549979 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.551010 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.551220 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.560498 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.568218 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.588665 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.588696 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=19.58866649 
podStartE2EDuration="19.58866649s" podCreationTimestamp="2026-01-26 17:03:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:03:39.58132612 +0000 UTC m=+315.534580111" watchObservedRunningTime="2026-01-26 17:03:39.58866649 +0000 UTC m=+315.541920491" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.591564 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.651364 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a-v4-0-config-system-session\") pod \"oauth-openshift-5d4455474f-tnwpf\" (UID: \"34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a\") " pod="openshift-authentication/oauth-openshift-5d4455474f-tnwpf" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.651861 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a-v4-0-config-user-template-login\") pod \"oauth-openshift-5d4455474f-tnwpf\" (UID: \"34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a\") " pod="openshift-authentication/oauth-openshift-5d4455474f-tnwpf" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.652106 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5d4455474f-tnwpf\" (UID: \"34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a\") " pod="openshift-authentication/oauth-openshift-5d4455474f-tnwpf" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.652295 4856 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a-audit-policies\") pod \"oauth-openshift-5d4455474f-tnwpf\" (UID: \"34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a\") " pod="openshift-authentication/oauth-openshift-5d4455474f-tnwpf" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.652641 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5d4455474f-tnwpf\" (UID: \"34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a\") " pod="openshift-authentication/oauth-openshift-5d4455474f-tnwpf" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.652811 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a-v4-0-config-system-router-certs\") pod \"oauth-openshift-5d4455474f-tnwpf\" (UID: \"34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a\") " pod="openshift-authentication/oauth-openshift-5d4455474f-tnwpf" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.652936 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a-audit-dir\") pod \"oauth-openshift-5d4455474f-tnwpf\" (UID: \"34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a\") " pod="openshift-authentication/oauth-openshift-5d4455474f-tnwpf" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.653075 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a-v4-0-config-user-template-error\") pod \"oauth-openshift-5d4455474f-tnwpf\" (UID: \"34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a\") " pod="openshift-authentication/oauth-openshift-5d4455474f-tnwpf" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.653220 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a-v4-0-config-system-service-ca\") pod \"oauth-openshift-5d4455474f-tnwpf\" (UID: \"34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a\") " pod="openshift-authentication/oauth-openshift-5d4455474f-tnwpf" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.653338 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5d4455474f-tnwpf\" (UID: \"34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a\") " pod="openshift-authentication/oauth-openshift-5d4455474f-tnwpf" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.653775 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5d4455474f-tnwpf\" (UID: \"34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a\") " pod="openshift-authentication/oauth-openshift-5d4455474f-tnwpf" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.654086 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a-v4-0-config-system-serving-cert\") pod 
\"oauth-openshift-5d4455474f-tnwpf\" (UID: \"34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a\") " pod="openshift-authentication/oauth-openshift-5d4455474f-tnwpf" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.654134 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5d4455474f-tnwpf\" (UID: \"34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a\") " pod="openshift-authentication/oauth-openshift-5d4455474f-tnwpf" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.654286 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-956kp\" (UniqueName: \"kubernetes.io/projected/34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a-kube-api-access-956kp\") pod \"oauth-openshift-5d4455474f-tnwpf\" (UID: \"34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a\") " pod="openshift-authentication/oauth-openshift-5d4455474f-tnwpf" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.742592 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.755423 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5d4455474f-tnwpf\" (UID: \"34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a\") " pod="openshift-authentication/oauth-openshift-5d4455474f-tnwpf" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.755747 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a-v4-0-config-system-router-certs\") pod \"oauth-openshift-5d4455474f-tnwpf\" (UID: \"34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a\") " pod="openshift-authentication/oauth-openshift-5d4455474f-tnwpf" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.755865 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a-audit-dir\") pod \"oauth-openshift-5d4455474f-tnwpf\" (UID: \"34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a\") " pod="openshift-authentication/oauth-openshift-5d4455474f-tnwpf" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.755955 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a-v4-0-config-user-template-error\") pod \"oauth-openshift-5d4455474f-tnwpf\" (UID: \"34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a\") " pod="openshift-authentication/oauth-openshift-5d4455474f-tnwpf" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.756041 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a-v4-0-config-system-service-ca\") pod \"oauth-openshift-5d4455474f-tnwpf\" (UID: \"34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a\") " pod="openshift-authentication/oauth-openshift-5d4455474f-tnwpf" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.756119 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5d4455474f-tnwpf\" (UID: \"34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a\") " 
pod="openshift-authentication/oauth-openshift-5d4455474f-tnwpf" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.756241 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5d4455474f-tnwpf\" (UID: \"34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a\") " pod="openshift-authentication/oauth-openshift-5d4455474f-tnwpf" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.756323 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5d4455474f-tnwpf\" (UID: \"34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a\") " pod="openshift-authentication/oauth-openshift-5d4455474f-tnwpf" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.756428 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5d4455474f-tnwpf\" (UID: \"34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a\") " pod="openshift-authentication/oauth-openshift-5d4455474f-tnwpf" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.756543 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-956kp\" (UniqueName: \"kubernetes.io/projected/34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a-kube-api-access-956kp\") pod \"oauth-openshift-5d4455474f-tnwpf\" (UID: \"34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a\") " pod="openshift-authentication/oauth-openshift-5d4455474f-tnwpf" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.756632 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a-v4-0-config-system-session\") pod \"oauth-openshift-5d4455474f-tnwpf\" (UID: \"34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a\") " pod="openshift-authentication/oauth-openshift-5d4455474f-tnwpf" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.756724 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a-v4-0-config-user-template-login\") pod \"oauth-openshift-5d4455474f-tnwpf\" (UID: \"34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a\") " pod="openshift-authentication/oauth-openshift-5d4455474f-tnwpf" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.756826 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5d4455474f-tnwpf\" (UID: \"34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a\") " pod="openshift-authentication/oauth-openshift-5d4455474f-tnwpf" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.756910 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a-audit-policies\") pod \"oauth-openshift-5d4455474f-tnwpf\" (UID: \"34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a\") " pod="openshift-authentication/oauth-openshift-5d4455474f-tnwpf" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.757461 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a-v4-0-config-system-service-ca\") pod \"oauth-openshift-5d4455474f-tnwpf\" (UID: \"34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a\") " 
pod="openshift-authentication/oauth-openshift-5d4455474f-tnwpf" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.757469 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a-audit-dir\") pod \"oauth-openshift-5d4455474f-tnwpf\" (UID: \"34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a\") " pod="openshift-authentication/oauth-openshift-5d4455474f-tnwpf" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.757685 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a-audit-policies\") pod \"oauth-openshift-5d4455474f-tnwpf\" (UID: \"34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a\") " pod="openshift-authentication/oauth-openshift-5d4455474f-tnwpf" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.757771 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5d4455474f-tnwpf\" (UID: \"34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a\") " pod="openshift-authentication/oauth-openshift-5d4455474f-tnwpf" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.758418 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5d4455474f-tnwpf\" (UID: \"34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a\") " pod="openshift-authentication/oauth-openshift-5d4455474f-tnwpf" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.763632 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a-v4-0-config-user-template-error\") pod \"oauth-openshift-5d4455474f-tnwpf\" (UID: \"34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a\") " pod="openshift-authentication/oauth-openshift-5d4455474f-tnwpf" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.763699 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5d4455474f-tnwpf\" (UID: \"34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a\") " pod="openshift-authentication/oauth-openshift-5d4455474f-tnwpf" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.763658 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a-v4-0-config-system-router-certs\") pod \"oauth-openshift-5d4455474f-tnwpf\" (UID: \"34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a\") " pod="openshift-authentication/oauth-openshift-5d4455474f-tnwpf" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.763653 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a-v4-0-config-user-template-login\") pod \"oauth-openshift-5d4455474f-tnwpf\" (UID: \"34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a\") " pod="openshift-authentication/oauth-openshift-5d4455474f-tnwpf" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.764302 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a-v4-0-config-system-session\") pod \"oauth-openshift-5d4455474f-tnwpf\" (UID: \"34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a\") " pod="openshift-authentication/oauth-openshift-5d4455474f-tnwpf" Jan 26 17:03:39 
crc kubenswrapper[4856]: I0126 17:03:39.766338 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5d4455474f-tnwpf\" (UID: \"34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a\") " pod="openshift-authentication/oauth-openshift-5d4455474f-tnwpf" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.767292 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5d4455474f-tnwpf\" (UID: \"34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a\") " pod="openshift-authentication/oauth-openshift-5d4455474f-tnwpf" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.767301 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5d4455474f-tnwpf\" (UID: \"34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a\") " pod="openshift-authentication/oauth-openshift-5d4455474f-tnwpf" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.781141 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.787932 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-956kp\" (UniqueName: \"kubernetes.io/projected/34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a-kube-api-access-956kp\") pod \"oauth-openshift-5d4455474f-tnwpf\" (UID: \"34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a\") " pod="openshift-authentication/oauth-openshift-5d4455474f-tnwpf" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 
17:03:39.829431 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.841829 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.890096 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5d4455474f-tnwpf" Jan 26 17:03:39 crc kubenswrapper[4856]: I0126 17:03:39.904548 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 26 17:03:40 crc kubenswrapper[4856]: I0126 17:03:40.097032 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 26 17:03:40 crc kubenswrapper[4856]: I0126 17:03:40.111477 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 26 17:03:40 crc kubenswrapper[4856]: I0126 17:03:40.207150 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 26 17:03:40 crc kubenswrapper[4856]: I0126 17:03:40.207510 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 26 17:03:40 crc kubenswrapper[4856]: I0126 17:03:40.292644 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 26 17:03:40 crc kubenswrapper[4856]: I0126 17:03:40.336621 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 26 17:03:40 crc kubenswrapper[4856]: I0126 
17:03:40.352045 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 17:03:40 crc kubenswrapper[4856]: I0126 17:03:40.379848 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 26 17:03:40 crc kubenswrapper[4856]: I0126 17:03:40.386862 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 26 17:03:40 crc kubenswrapper[4856]: I0126 17:03:40.437553 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 26 17:03:40 crc kubenswrapper[4856]: I0126 17:03:40.446682 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 26 17:03:40 crc kubenswrapper[4856]: I0126 17:03:40.534301 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 26 17:03:40 crc kubenswrapper[4856]: I0126 17:03:40.590842 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 26 17:03:40 crc kubenswrapper[4856]: I0126 17:03:40.614767 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 26 17:03:40 crc kubenswrapper[4856]: I0126 17:03:40.653362 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 26 17:03:40 crc kubenswrapper[4856]: I0126 17:03:40.673389 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 17:03:40 crc kubenswrapper[4856]: I0126 17:03:40.681718 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 26 17:03:40 crc 
kubenswrapper[4856]: I0126 17:03:40.691009 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 26 17:03:40 crc kubenswrapper[4856]: I0126 17:03:40.695944 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 26 17:03:40 crc kubenswrapper[4856]: I0126 17:03:40.702046 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 26 17:03:40 crc kubenswrapper[4856]: I0126 17:03:40.811949 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 17:03:40 crc kubenswrapper[4856]: I0126 17:03:40.920564 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 26 17:03:40 crc kubenswrapper[4856]: I0126 17:03:40.993421 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 26 17:03:41 crc kubenswrapper[4856]: I0126 17:03:41.070145 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 26 17:03:41 crc kubenswrapper[4856]: I0126 17:03:41.099006 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 26 17:03:41 crc kubenswrapper[4856]: I0126 17:03:41.114802 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 26 17:03:41 crc kubenswrapper[4856]: I0126 17:03:41.207020 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 26 17:03:41 crc kubenswrapper[4856]: I0126 17:03:41.235459 4856 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 26 17:03:41 crc kubenswrapper[4856]: I0126 17:03:41.386121 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 17:03:41 crc kubenswrapper[4856]: I0126 17:03:41.401655 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69008ed1-f3e5-400d-852f-adbcd94199f6" path="/var/lib/kubelet/pods/69008ed1-f3e5-400d-852f-adbcd94199f6/volumes" Jan 26 17:03:41 crc kubenswrapper[4856]: I0126 17:03:41.402569 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfa40861-cc08-4145-a185-6a3fb07eaabe" path="/var/lib/kubelet/pods/cfa40861-cc08-4145-a185-6a3fb07eaabe/volumes" Jan 26 17:03:41 crc kubenswrapper[4856]: I0126 17:03:41.507581 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 26 17:03:41 crc kubenswrapper[4856]: I0126 17:03:41.552775 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 26 17:03:41 crc kubenswrapper[4856]: I0126 17:03:41.676293 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 26 17:03:41 crc kubenswrapper[4856]: I0126 17:03:41.681606 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 26 17:03:41 crc kubenswrapper[4856]: I0126 17:03:41.768400 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 26 17:03:41 crc kubenswrapper[4856]: I0126 17:03:41.774815 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 26 17:03:41 crc kubenswrapper[4856]: I0126 17:03:41.784967 4856 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 26 17:03:41 crc kubenswrapper[4856]: I0126 17:03:41.856110 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 26 17:03:41 crc kubenswrapper[4856]: I0126 17:03:41.869392 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 26 17:03:41 crc kubenswrapper[4856]: I0126 17:03:41.870980 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 26 17:03:41 crc kubenswrapper[4856]: I0126 17:03:41.967208 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 26 17:03:42 crc kubenswrapper[4856]: I0126 17:03:42.090905 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 26 17:03:42 crc kubenswrapper[4856]: I0126 17:03:42.100956 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 26 17:03:42 crc kubenswrapper[4856]: I0126 17:03:42.117194 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 26 17:03:42 crc kubenswrapper[4856]: I0126 17:03:42.201162 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 26 17:03:42 crc kubenswrapper[4856]: I0126 17:03:42.427050 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 26 17:03:42 crc kubenswrapper[4856]: I0126 17:03:42.444486 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 26 17:03:42 crc kubenswrapper[4856]: I0126 
17:03:42.455579 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 26 17:03:42 crc kubenswrapper[4856]: I0126 17:03:42.495994 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 26 17:03:42 crc kubenswrapper[4856]: I0126 17:03:42.511041 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 26 17:03:42 crc kubenswrapper[4856]: I0126 17:03:42.572175 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 26 17:03:42 crc kubenswrapper[4856]: I0126 17:03:42.600918 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 26 17:03:42 crc kubenswrapper[4856]: I0126 17:03:42.731826 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 26 17:03:42 crc kubenswrapper[4856]: I0126 17:03:42.739656 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 26 17:03:42 crc kubenswrapper[4856]: I0126 17:03:42.744153 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 26 17:03:42 crc kubenswrapper[4856]: I0126 17:03:42.750278 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 17:03:42 crc kubenswrapper[4856]: I0126 17:03:42.784743 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 26 17:03:42 crc kubenswrapper[4856]: I0126 17:03:42.826290 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 26 
17:03:42 crc kubenswrapper[4856]: I0126 17:03:42.847334 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 26 17:03:43 crc kubenswrapper[4856]: I0126 17:03:43.106890 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 26 17:03:43 crc kubenswrapper[4856]: I0126 17:03:43.273916 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 26 17:03:43 crc kubenswrapper[4856]: I0126 17:03:43.344054 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 26 17:03:43 crc kubenswrapper[4856]: I0126 17:03:43.361784 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 26 17:03:43 crc kubenswrapper[4856]: I0126 17:03:43.413162 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 26 17:03:43 crc kubenswrapper[4856]: I0126 17:03:43.448067 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 26 17:03:43 crc kubenswrapper[4856]: I0126 17:03:43.467016 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 17:03:43 crc kubenswrapper[4856]: I0126 17:03:43.572175 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 26 17:03:43 crc kubenswrapper[4856]: I0126 17:03:43.615051 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 26 17:03:43 crc kubenswrapper[4856]: I0126 17:03:43.691321 4856 reflector.go:368] 
Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 26 17:03:43 crc kubenswrapper[4856]: I0126 17:03:43.707851 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 26 17:03:43 crc kubenswrapper[4856]: I0126 17:03:43.846910 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 26 17:03:43 crc kubenswrapper[4856]: I0126 17:03:43.882330 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 26 17:03:43 crc kubenswrapper[4856]: I0126 17:03:43.890218 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 26 17:03:43 crc kubenswrapper[4856]: I0126 17:03:43.892283 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 26 17:03:44 crc kubenswrapper[4856]: I0126 17:03:44.057943 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 26 17:03:44 crc kubenswrapper[4856]: I0126 17:03:44.063123 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 26 17:03:44 crc kubenswrapper[4856]: I0126 17:03:44.116621 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 26 17:03:44 crc kubenswrapper[4856]: I0126 17:03:44.184034 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 26 17:03:44 crc kubenswrapper[4856]: I0126 17:03:44.187346 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 17:03:44 crc 
kubenswrapper[4856]: I0126 17:03:44.240205 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 26 17:03:44 crc kubenswrapper[4856]: I0126 17:03:44.296384 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 26 17:03:44 crc kubenswrapper[4856]: I0126 17:03:44.302847 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 26 17:03:44 crc kubenswrapper[4856]: I0126 17:03:44.368562 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 26 17:03:44 crc kubenswrapper[4856]: I0126 17:03:44.410473 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 26 17:03:44 crc kubenswrapper[4856]: I0126 17:03:44.444545 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:03:44 crc kubenswrapper[4856]: I0126 17:03:44.444594 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:03:44 crc kubenswrapper[4856]: I0126 17:03:44.450655 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:03:44 crc kubenswrapper[4856]: I0126 17:03:44.458659 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 26 17:03:44 crc kubenswrapper[4856]: I0126 17:03:44.509055 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 26 17:03:44 crc kubenswrapper[4856]: I0126 17:03:44.619013 4856 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 26 17:03:44 crc kubenswrapper[4856]: I0126 17:03:44.628947 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 17:03:44 crc kubenswrapper[4856]: I0126 17:03:44.642602 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 26 17:03:44 crc kubenswrapper[4856]: I0126 17:03:44.669216 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 26 17:03:44 crc kubenswrapper[4856]: I0126 17:03:44.717699 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 26 17:03:44 crc kubenswrapper[4856]: I0126 17:03:44.718203 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 26 17:03:44 crc kubenswrapper[4856]: I0126 17:03:44.724828 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 26 17:03:44 crc kubenswrapper[4856]: I0126 17:03:44.886059 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 26 17:03:44 crc kubenswrapper[4856]: I0126 17:03:44.918692 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 26 17:03:45 crc kubenswrapper[4856]: I0126 17:03:45.006189 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 26 17:03:45 crc kubenswrapper[4856]: I0126 17:03:45.006211 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 26 17:03:45 crc kubenswrapper[4856]: I0126 
17:03:45.032771 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 17:03:45 crc kubenswrapper[4856]: I0126 17:03:45.043484 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 26 17:03:45 crc kubenswrapper[4856]: I0126 17:03:45.155896 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 26 17:03:45 crc kubenswrapper[4856]: I0126 17:03:45.192913 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 26 17:03:45 crc kubenswrapper[4856]: I0126 17:03:45.220787 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 26 17:03:45 crc kubenswrapper[4856]: I0126 17:03:45.252971 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 26 17:03:45 crc kubenswrapper[4856]: I0126 17:03:45.349091 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 26 17:03:45 crc kubenswrapper[4856]: I0126 17:03:45.638498 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 26 17:03:45 crc kubenswrapper[4856]: I0126 17:03:45.748334 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 26 17:03:45 crc kubenswrapper[4856]: I0126 17:03:45.819406 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 26 17:03:45 crc kubenswrapper[4856]: I0126 17:03:45.858922 4856 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 26 17:03:45 crc kubenswrapper[4856]: I0126 17:03:45.913757 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 26 17:03:45 crc kubenswrapper[4856]: I0126 17:03:45.933672 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 26 17:03:45 crc kubenswrapper[4856]: I0126 17:03:45.957838 4856 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 26 17:03:46 crc kubenswrapper[4856]: I0126 17:03:46.001944 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 17:03:46 crc kubenswrapper[4856]: I0126 17:03:46.075863 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 26 17:03:46 crc kubenswrapper[4856]: I0126 17:03:46.222434 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 26 17:03:46 crc kubenswrapper[4856]: I0126 17:03:46.398323 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 26 17:03:46 crc kubenswrapper[4856]: I0126 17:03:46.401310 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5d4455474f-tnwpf"] Jan 26 17:03:46 crc kubenswrapper[4856]: I0126 17:03:46.436347 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 26 17:03:46 crc kubenswrapper[4856]: I0126 17:03:46.487373 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 26 17:03:46 crc kubenswrapper[4856]: I0126 17:03:46.570503 4856 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5d4455474f-tnwpf"] Jan 26 17:03:46 crc kubenswrapper[4856]: I0126 17:03:46.629515 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 26 17:03:46 crc kubenswrapper[4856]: I0126 17:03:46.636002 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 26 17:03:46 crc kubenswrapper[4856]: I0126 17:03:46.729919 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 26 17:03:46 crc kubenswrapper[4856]: I0126 17:03:46.730148 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 26 17:03:46 crc kubenswrapper[4856]: I0126 17:03:46.730995 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 26 17:03:46 crc kubenswrapper[4856]: I0126 17:03:46.741615 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5d4455474f-tnwpf" event={"ID":"34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a","Type":"ContainerStarted","Data":"b0baa3f3e061df600bbe107f483216185a8feed4fc98462c2b563cfb8b2419a1"} Jan 26 17:03:46 crc kubenswrapper[4856]: I0126 17:03:46.784005 4856 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 26 17:03:46 crc kubenswrapper[4856]: I0126 17:03:46.787840 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 26 17:03:46 crc kubenswrapper[4856]: I0126 17:03:46.796830 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 26 17:03:46 crc kubenswrapper[4856]: I0126 17:03:46.940772 4856 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 26 17:03:46 crc kubenswrapper[4856]: I0126 17:03:46.981682 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 26 17:03:46 crc kubenswrapper[4856]: I0126 17:03:46.985889 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 26 17:03:47 crc kubenswrapper[4856]: I0126 17:03:47.058127 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 26 17:03:47 crc kubenswrapper[4856]: I0126 17:03:47.082881 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 26 17:03:47 crc kubenswrapper[4856]: I0126 17:03:47.110882 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 26 17:03:47 crc kubenswrapper[4856]: I0126 17:03:47.232009 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 26 17:03:47 crc kubenswrapper[4856]: I0126 17:03:47.271316 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 26 17:03:47 crc kubenswrapper[4856]: I0126 17:03:47.391287 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 26 17:03:47 crc kubenswrapper[4856]: I0126 17:03:47.486696 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 26 17:03:47 crc kubenswrapper[4856]: I0126 17:03:47.561234 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 26 17:03:47 crc kubenswrapper[4856]: I0126 17:03:47.592368 4856 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 26 17:03:47 crc kubenswrapper[4856]: I0126 17:03:47.749472 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5d4455474f-tnwpf" event={"ID":"34c90e28-cdc1-4cd3-a7e1-d84a17c9ce5a","Type":"ContainerStarted","Data":"ec41a4e78361c1dbb8d9be4c452cd1e41744d581006301daa2c79b9405a0c8c1"} Jan 26 17:03:47 crc kubenswrapper[4856]: I0126 17:03:47.749945 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5d4455474f-tnwpf" Jan 26 17:03:47 crc kubenswrapper[4856]: I0126 17:03:47.755096 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5d4455474f-tnwpf" Jan 26 17:03:47 crc kubenswrapper[4856]: I0126 17:03:47.769381 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5d4455474f-tnwpf" podStartSLOduration=63.769357411 podStartE2EDuration="1m3.769357411s" podCreationTimestamp="2026-01-26 17:02:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:03:47.768118315 +0000 UTC m=+323.721372316" watchObservedRunningTime="2026-01-26 17:03:47.769357411 +0000 UTC m=+323.722611392" Jan 26 17:03:47 crc kubenswrapper[4856]: I0126 17:03:47.850610 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 26 17:03:47 crc kubenswrapper[4856]: I0126 17:03:47.908322 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 26 17:03:47 crc kubenswrapper[4856]: I0126 17:03:47.996729 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 26 17:03:48 crc 
kubenswrapper[4856]: I0126 17:03:48.060113 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 17:03:48 crc kubenswrapper[4856]: I0126 17:03:48.084512 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 26 17:03:48 crc kubenswrapper[4856]: I0126 17:03:48.272758 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 26 17:03:48 crc kubenswrapper[4856]: I0126 17:03:48.335245 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 17:03:48 crc kubenswrapper[4856]: I0126 17:03:48.595865 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 26 17:03:55 crc kubenswrapper[4856]: I0126 17:03:55.541564 4856 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 17:03:55 crc kubenswrapper[4856]: I0126 17:03:55.541963 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://b6f864463a7443541f4490597f84a8f03e6d8c1a587e47002bb795632c0df2d6" gracePeriod=5 Jan 26 17:04:01 crc kubenswrapper[4856]: I0126 17:04:01.108361 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 26 17:04:01 crc kubenswrapper[4856]: I0126 17:04:01.108817 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 17:04:01 crc kubenswrapper[4856]: I0126 17:04:01.157088 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 17:04:01 crc kubenswrapper[4856]: I0126 17:04:01.157155 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 17:04:01 crc kubenswrapper[4856]: I0126 17:04:01.157182 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 17:04:01 crc kubenswrapper[4856]: I0126 17:04:01.157206 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 17:04:01 crc kubenswrapper[4856]: I0126 17:04:01.157238 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 17:04:01 crc kubenswrapper[4856]: I0126 17:04:01.157266 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: 
"resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:04:01 crc kubenswrapper[4856]: I0126 17:04:01.157302 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:04:01 crc kubenswrapper[4856]: I0126 17:04:01.157282 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:04:01 crc kubenswrapper[4856]: I0126 17:04:01.157241 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:04:01 crc kubenswrapper[4856]: I0126 17:04:01.157550 4856 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 26 17:04:01 crc kubenswrapper[4856]: I0126 17:04:01.157568 4856 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 17:04:01 crc kubenswrapper[4856]: I0126 17:04:01.157580 4856 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 26 17:04:01 crc kubenswrapper[4856]: I0126 17:04:01.157593 4856 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 26 17:04:01 crc kubenswrapper[4856]: I0126 17:04:01.165293 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:04:01 crc kubenswrapper[4856]: I0126 17:04:01.258414 4856 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 17:04:01 crc kubenswrapper[4856]: I0126 17:04:01.406497 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 26 17:04:01 crc kubenswrapper[4856]: I0126 17:04:01.407078 4856 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 26 17:04:01 crc kubenswrapper[4856]: I0126 17:04:01.436870 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 17:04:01 crc kubenswrapper[4856]: I0126 17:04:01.436929 4856 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="b2d37eac-1f35-4184-aaf4-fe3e28069de2" Jan 26 17:04:01 crc kubenswrapper[4856]: I0126 17:04:01.439550 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 17:04:01 crc kubenswrapper[4856]: I0126 17:04:01.439598 4856 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="b2d37eac-1f35-4184-aaf4-fe3e28069de2" Jan 26 17:04:01 crc kubenswrapper[4856]: I0126 17:04:01.840018 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 26 17:04:01 crc kubenswrapper[4856]: I0126 17:04:01.840326 4856 generic.go:334] "Generic (PLEG): container finished" 
podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="b6f864463a7443541f4490597f84a8f03e6d8c1a587e47002bb795632c0df2d6" exitCode=137 Jan 26 17:04:01 crc kubenswrapper[4856]: I0126 17:04:01.840389 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 17:04:01 crc kubenswrapper[4856]: I0126 17:04:01.840426 4856 scope.go:117] "RemoveContainer" containerID="b6f864463a7443541f4490597f84a8f03e6d8c1a587e47002bb795632c0df2d6" Jan 26 17:04:01 crc kubenswrapper[4856]: I0126 17:04:01.859307 4856 scope.go:117] "RemoveContainer" containerID="b6f864463a7443541f4490597f84a8f03e6d8c1a587e47002bb795632c0df2d6" Jan 26 17:04:01 crc kubenswrapper[4856]: E0126 17:04:01.859880 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6f864463a7443541f4490597f84a8f03e6d8c1a587e47002bb795632c0df2d6\": container with ID starting with b6f864463a7443541f4490597f84a8f03e6d8c1a587e47002bb795632c0df2d6 not found: ID does not exist" containerID="b6f864463a7443541f4490597f84a8f03e6d8c1a587e47002bb795632c0df2d6" Jan 26 17:04:01 crc kubenswrapper[4856]: I0126 17:04:01.859944 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6f864463a7443541f4490597f84a8f03e6d8c1a587e47002bb795632c0df2d6"} err="failed to get container status \"b6f864463a7443541f4490597f84a8f03e6d8c1a587e47002bb795632c0df2d6\": rpc error: code = NotFound desc = could not find container \"b6f864463a7443541f4490597f84a8f03e6d8c1a587e47002bb795632c0df2d6\": container with ID starting with b6f864463a7443541f4490597f84a8f03e6d8c1a587e47002bb795632c0df2d6 not found: ID does not exist" Jan 26 17:04:09 crc kubenswrapper[4856]: I0126 17:04:09.477776 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-54b988dd69-ljwqg"] Jan 26 17:04:09 crc kubenswrapper[4856]: I0126 
17:04:09.478636 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-54b988dd69-ljwqg" podUID="de17aec3-fab1-4a5e-bd46-6a1545b93a89" containerName="controller-manager" containerID="cri-o://5dd5f652c6d735efc7f4d83a862afc4f09c88afb0921e6ecb304af76698cc9c8" gracePeriod=30 Jan 26 17:04:09 crc kubenswrapper[4856]: I0126 17:04:09.578665 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cb449784d-bqprm"] Jan 26 17:04:09 crc kubenswrapper[4856]: I0126 17:04:09.578908 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-cb449784d-bqprm" podUID="54dee8cd-259a-4c9f-9e56-fbd0ea167f46" containerName="route-controller-manager" containerID="cri-o://3157d6dd787fe30eefd4db24c3f3619f52444746080c65c699eab5d0d02ab52a" gracePeriod=30 Jan 26 17:04:09 crc kubenswrapper[4856]: I0126 17:04:09.911329 4856 generic.go:334] "Generic (PLEG): container finished" podID="de17aec3-fab1-4a5e-bd46-6a1545b93a89" containerID="5dd5f652c6d735efc7f4d83a862afc4f09c88afb0921e6ecb304af76698cc9c8" exitCode=0 Jan 26 17:04:09 crc kubenswrapper[4856]: I0126 17:04:09.911805 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54b988dd69-ljwqg" event={"ID":"de17aec3-fab1-4a5e-bd46-6a1545b93a89","Type":"ContainerDied","Data":"5dd5f652c6d735efc7f4d83a862afc4f09c88afb0921e6ecb304af76698cc9c8"} Jan 26 17:04:09 crc kubenswrapper[4856]: I0126 17:04:09.914534 4856 generic.go:334] "Generic (PLEG): container finished" podID="54dee8cd-259a-4c9f-9e56-fbd0ea167f46" containerID="3157d6dd787fe30eefd4db24c3f3619f52444746080c65c699eab5d0d02ab52a" exitCode=0 Jan 26 17:04:09 crc kubenswrapper[4856]: I0126 17:04:09.914615 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-cb449784d-bqprm" event={"ID":"54dee8cd-259a-4c9f-9e56-fbd0ea167f46","Type":"ContainerDied","Data":"3157d6dd787fe30eefd4db24c3f3619f52444746080c65c699eab5d0d02ab52a"} Jan 26 17:04:09 crc kubenswrapper[4856]: I0126 17:04:09.948624 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-54b988dd69-ljwqg" Jan 26 17:04:10 crc kubenswrapper[4856]: I0126 17:04:10.070746 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cb449784d-bqprm" Jan 26 17:04:10 crc kubenswrapper[4856]: I0126 17:04:10.106763 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxzpz\" (UniqueName: \"kubernetes.io/projected/de17aec3-fab1-4a5e-bd46-6a1545b93a89-kube-api-access-sxzpz\") pod \"de17aec3-fab1-4a5e-bd46-6a1545b93a89\" (UID: \"de17aec3-fab1-4a5e-bd46-6a1545b93a89\") " Jan 26 17:04:10 crc kubenswrapper[4856]: I0126 17:04:10.106849 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de17aec3-fab1-4a5e-bd46-6a1545b93a89-proxy-ca-bundles\") pod \"de17aec3-fab1-4a5e-bd46-6a1545b93a89\" (UID: \"de17aec3-fab1-4a5e-bd46-6a1545b93a89\") " Jan 26 17:04:10 crc kubenswrapper[4856]: I0126 17:04:10.106893 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de17aec3-fab1-4a5e-bd46-6a1545b93a89-serving-cert\") pod \"de17aec3-fab1-4a5e-bd46-6a1545b93a89\" (UID: \"de17aec3-fab1-4a5e-bd46-6a1545b93a89\") " Jan 26 17:04:10 crc kubenswrapper[4856]: I0126 17:04:10.106926 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de17aec3-fab1-4a5e-bd46-6a1545b93a89-client-ca\") pod 
\"de17aec3-fab1-4a5e-bd46-6a1545b93a89\" (UID: \"de17aec3-fab1-4a5e-bd46-6a1545b93a89\") " Jan 26 17:04:10 crc kubenswrapper[4856]: I0126 17:04:10.107680 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de17aec3-fab1-4a5e-bd46-6a1545b93a89-client-ca" (OuterVolumeSpecName: "client-ca") pod "de17aec3-fab1-4a5e-bd46-6a1545b93a89" (UID: "de17aec3-fab1-4a5e-bd46-6a1545b93a89"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:04:10 crc kubenswrapper[4856]: I0126 17:04:10.107706 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de17aec3-fab1-4a5e-bd46-6a1545b93a89-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "de17aec3-fab1-4a5e-bd46-6a1545b93a89" (UID: "de17aec3-fab1-4a5e-bd46-6a1545b93a89"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:04:10 crc kubenswrapper[4856]: I0126 17:04:10.106982 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de17aec3-fab1-4a5e-bd46-6a1545b93a89-config\") pod \"de17aec3-fab1-4a5e-bd46-6a1545b93a89\" (UID: \"de17aec3-fab1-4a5e-bd46-6a1545b93a89\") " Jan 26 17:04:10 crc kubenswrapper[4856]: I0126 17:04:10.107908 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de17aec3-fab1-4a5e-bd46-6a1545b93a89-config" (OuterVolumeSpecName: "config") pod "de17aec3-fab1-4a5e-bd46-6a1545b93a89" (UID: "de17aec3-fab1-4a5e-bd46-6a1545b93a89"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:04:10 crc kubenswrapper[4856]: I0126 17:04:10.108501 4856 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de17aec3-fab1-4a5e-bd46-6a1545b93a89-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 17:04:10 crc kubenswrapper[4856]: I0126 17:04:10.108542 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de17aec3-fab1-4a5e-bd46-6a1545b93a89-config\") on node \"crc\" DevicePath \"\"" Jan 26 17:04:10 crc kubenswrapper[4856]: I0126 17:04:10.108561 4856 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de17aec3-fab1-4a5e-bd46-6a1545b93a89-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 17:04:10 crc kubenswrapper[4856]: I0126 17:04:10.111941 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de17aec3-fab1-4a5e-bd46-6a1545b93a89-kube-api-access-sxzpz" (OuterVolumeSpecName: "kube-api-access-sxzpz") pod "de17aec3-fab1-4a5e-bd46-6a1545b93a89" (UID: "de17aec3-fab1-4a5e-bd46-6a1545b93a89"). InnerVolumeSpecName "kube-api-access-sxzpz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:04:10 crc kubenswrapper[4856]: I0126 17:04:10.112707 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de17aec3-fab1-4a5e-bd46-6a1545b93a89-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "de17aec3-fab1-4a5e-bd46-6a1545b93a89" (UID: "de17aec3-fab1-4a5e-bd46-6a1545b93a89"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:04:10 crc kubenswrapper[4856]: I0126 17:04:10.209490 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54dee8cd-259a-4c9f-9e56-fbd0ea167f46-config\") pod \"54dee8cd-259a-4c9f-9e56-fbd0ea167f46\" (UID: \"54dee8cd-259a-4c9f-9e56-fbd0ea167f46\") " Jan 26 17:04:10 crc kubenswrapper[4856]: I0126 17:04:10.209566 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vhl4\" (UniqueName: \"kubernetes.io/projected/54dee8cd-259a-4c9f-9e56-fbd0ea167f46-kube-api-access-9vhl4\") pod \"54dee8cd-259a-4c9f-9e56-fbd0ea167f46\" (UID: \"54dee8cd-259a-4c9f-9e56-fbd0ea167f46\") " Jan 26 17:04:10 crc kubenswrapper[4856]: I0126 17:04:10.209613 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54dee8cd-259a-4c9f-9e56-fbd0ea167f46-serving-cert\") pod \"54dee8cd-259a-4c9f-9e56-fbd0ea167f46\" (UID: \"54dee8cd-259a-4c9f-9e56-fbd0ea167f46\") " Jan 26 17:04:10 crc kubenswrapper[4856]: I0126 17:04:10.209656 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/54dee8cd-259a-4c9f-9e56-fbd0ea167f46-client-ca\") pod \"54dee8cd-259a-4c9f-9e56-fbd0ea167f46\" (UID: \"54dee8cd-259a-4c9f-9e56-fbd0ea167f46\") " Jan 26 17:04:10 crc kubenswrapper[4856]: I0126 17:04:10.209877 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxzpz\" (UniqueName: \"kubernetes.io/projected/de17aec3-fab1-4a5e-bd46-6a1545b93a89-kube-api-access-sxzpz\") on node \"crc\" DevicePath \"\"" Jan 26 17:04:10 crc kubenswrapper[4856]: I0126 17:04:10.209894 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de17aec3-fab1-4a5e-bd46-6a1545b93a89-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 
17:04:10 crc kubenswrapper[4856]: I0126 17:04:10.210695 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54dee8cd-259a-4c9f-9e56-fbd0ea167f46-config" (OuterVolumeSpecName: "config") pod "54dee8cd-259a-4c9f-9e56-fbd0ea167f46" (UID: "54dee8cd-259a-4c9f-9e56-fbd0ea167f46"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:04:10 crc kubenswrapper[4856]: I0126 17:04:10.210696 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54dee8cd-259a-4c9f-9e56-fbd0ea167f46-client-ca" (OuterVolumeSpecName: "client-ca") pod "54dee8cd-259a-4c9f-9e56-fbd0ea167f46" (UID: "54dee8cd-259a-4c9f-9e56-fbd0ea167f46"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:04:10 crc kubenswrapper[4856]: I0126 17:04:10.212753 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54dee8cd-259a-4c9f-9e56-fbd0ea167f46-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "54dee8cd-259a-4c9f-9e56-fbd0ea167f46" (UID: "54dee8cd-259a-4c9f-9e56-fbd0ea167f46"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:04:10 crc kubenswrapper[4856]: I0126 17:04:10.215141 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54dee8cd-259a-4c9f-9e56-fbd0ea167f46-kube-api-access-9vhl4" (OuterVolumeSpecName: "kube-api-access-9vhl4") pod "54dee8cd-259a-4c9f-9e56-fbd0ea167f46" (UID: "54dee8cd-259a-4c9f-9e56-fbd0ea167f46"). InnerVolumeSpecName "kube-api-access-9vhl4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:04:10 crc kubenswrapper[4856]: I0126 17:04:10.310709 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54dee8cd-259a-4c9f-9e56-fbd0ea167f46-config\") on node \"crc\" DevicePath \"\"" Jan 26 17:04:10 crc kubenswrapper[4856]: I0126 17:04:10.310751 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vhl4\" (UniqueName: \"kubernetes.io/projected/54dee8cd-259a-4c9f-9e56-fbd0ea167f46-kube-api-access-9vhl4\") on node \"crc\" DevicePath \"\"" Jan 26 17:04:10 crc kubenswrapper[4856]: I0126 17:04:10.310763 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54dee8cd-259a-4c9f-9e56-fbd0ea167f46-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 17:04:10 crc kubenswrapper[4856]: I0126 17:04:10.310777 4856 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/54dee8cd-259a-4c9f-9e56-fbd0ea167f46-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 17:04:10 crc kubenswrapper[4856]: I0126 17:04:10.921181 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cb449784d-bqprm" event={"ID":"54dee8cd-259a-4c9f-9e56-fbd0ea167f46","Type":"ContainerDied","Data":"960ff00bedf28636eb04c4f352e2d6d2e33a5ceb9800e901e018103cd5ac5859"} Jan 26 17:04:10 crc kubenswrapper[4856]: I0126 17:04:10.921213 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cb449784d-bqprm" Jan 26 17:04:10 crc kubenswrapper[4856]: I0126 17:04:10.921256 4856 scope.go:117] "RemoveContainer" containerID="3157d6dd787fe30eefd4db24c3f3619f52444746080c65c699eab5d0d02ab52a" Jan 26 17:04:10 crc kubenswrapper[4856]: I0126 17:04:10.924919 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54b988dd69-ljwqg" event={"ID":"de17aec3-fab1-4a5e-bd46-6a1545b93a89","Type":"ContainerDied","Data":"1bccd3720f328e7d0b92fc36bcc35726a97ebe6a8070f5cbb1608de57071e2d0"} Jan 26 17:04:10 crc kubenswrapper[4856]: I0126 17:04:10.925083 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-54b988dd69-ljwqg" Jan 26 17:04:10 crc kubenswrapper[4856]: I0126 17:04:10.940678 4856 scope.go:117] "RemoveContainer" containerID="5dd5f652c6d735efc7f4d83a862afc4f09c88afb0921e6ecb304af76698cc9c8" Jan 26 17:04:10 crc kubenswrapper[4856]: I0126 17:04:10.950049 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cb449784d-bqprm"] Jan 26 17:04:10 crc kubenswrapper[4856]: I0126 17:04:10.954906 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cb449784d-bqprm"] Jan 26 17:04:10 crc kubenswrapper[4856]: I0126 17:04:10.958878 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-54b988dd69-ljwqg"] Jan 26 17:04:10 crc kubenswrapper[4856]: I0126 17:04:10.962133 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-54b988dd69-ljwqg"] Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.394180 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-787548c685-7dskp"] Jan 26 
17:04:11 crc kubenswrapper[4856]: E0126 17:04:11.394697 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54dee8cd-259a-4c9f-9e56-fbd0ea167f46" containerName="route-controller-manager" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.394720 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="54dee8cd-259a-4c9f-9e56-fbd0ea167f46" containerName="route-controller-manager" Jan 26 17:04:11 crc kubenswrapper[4856]: E0126 17:04:11.394739 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de17aec3-fab1-4a5e-bd46-6a1545b93a89" containerName="controller-manager" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.394749 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="de17aec3-fab1-4a5e-bd46-6a1545b93a89" containerName="controller-manager" Jan 26 17:04:11 crc kubenswrapper[4856]: E0126 17:04:11.394775 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.394786 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.395050 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.395071 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="de17aec3-fab1-4a5e-bd46-6a1545b93a89" containerName="controller-manager" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.395095 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="54dee8cd-259a-4c9f-9e56-fbd0ea167f46" containerName="route-controller-manager" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.399641 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-787548c685-7dskp" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.403559 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.403809 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.403891 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.404217 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.404413 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.404474 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.405148 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54dee8cd-259a-4c9f-9e56-fbd0ea167f46" path="/var/lib/kubelet/pods/54dee8cd-259a-4c9f-9e56-fbd0ea167f46/volumes" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.406170 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de17aec3-fab1-4a5e-bd46-6a1545b93a89" path="/var/lib/kubelet/pods/de17aec3-fab1-4a5e-bd46-6a1545b93a89/volumes" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.406857 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c4b475647-2td8w"] Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.407884 4856 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c4b475647-2td8w"] Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.408021 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4b475647-2td8w" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.411077 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.411276 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.411440 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.411497 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.411756 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.420907 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.421852 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-787548c685-7dskp"] Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.423782 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.527105 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f710888e-8c73-4d02-8ab4-f530b2562d8e-proxy-ca-bundles\") pod \"controller-manager-787548c685-7dskp\" (UID: \"f710888e-8c73-4d02-8ab4-f530b2562d8e\") " pod="openshift-controller-manager/controller-manager-787548c685-7dskp" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.527157 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f710888e-8c73-4d02-8ab4-f530b2562d8e-client-ca\") pod \"controller-manager-787548c685-7dskp\" (UID: \"f710888e-8c73-4d02-8ab4-f530b2562d8e\") " pod="openshift-controller-manager/controller-manager-787548c685-7dskp" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.527576 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4edd939-e622-4a82-9c29-a115042d697e-config\") pod \"route-controller-manager-5c4b475647-2td8w\" (UID: \"e4edd939-e622-4a82-9c29-a115042d697e\") " pod="openshift-route-controller-manager/route-controller-manager-5c4b475647-2td8w" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.527656 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x9xc\" (UniqueName: \"kubernetes.io/projected/f710888e-8c73-4d02-8ab4-f530b2562d8e-kube-api-access-7x9xc\") pod \"controller-manager-787548c685-7dskp\" (UID: \"f710888e-8c73-4d02-8ab4-f530b2562d8e\") " pod="openshift-controller-manager/controller-manager-787548c685-7dskp" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.527773 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f710888e-8c73-4d02-8ab4-f530b2562d8e-config\") pod \"controller-manager-787548c685-7dskp\" (UID: \"f710888e-8c73-4d02-8ab4-f530b2562d8e\") " 
pod="openshift-controller-manager/controller-manager-787548c685-7dskp" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.527830 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnhw8\" (UniqueName: \"kubernetes.io/projected/e4edd939-e622-4a82-9c29-a115042d697e-kube-api-access-dnhw8\") pod \"route-controller-manager-5c4b475647-2td8w\" (UID: \"e4edd939-e622-4a82-9c29-a115042d697e\") " pod="openshift-route-controller-manager/route-controller-manager-5c4b475647-2td8w" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.527860 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4edd939-e622-4a82-9c29-a115042d697e-serving-cert\") pod \"route-controller-manager-5c4b475647-2td8w\" (UID: \"e4edd939-e622-4a82-9c29-a115042d697e\") " pod="openshift-route-controller-manager/route-controller-manager-5c4b475647-2td8w" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.527898 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e4edd939-e622-4a82-9c29-a115042d697e-client-ca\") pod \"route-controller-manager-5c4b475647-2td8w\" (UID: \"e4edd939-e622-4a82-9c29-a115042d697e\") " pod="openshift-route-controller-manager/route-controller-manager-5c4b475647-2td8w" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.527950 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f710888e-8c73-4d02-8ab4-f530b2562d8e-serving-cert\") pod \"controller-manager-787548c685-7dskp\" (UID: \"f710888e-8c73-4d02-8ab4-f530b2562d8e\") " pod="openshift-controller-manager/controller-manager-787548c685-7dskp" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.628841 4856 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4edd939-e622-4a82-9c29-a115042d697e-config\") pod \"route-controller-manager-5c4b475647-2td8w\" (UID: \"e4edd939-e622-4a82-9c29-a115042d697e\") " pod="openshift-route-controller-manager/route-controller-manager-5c4b475647-2td8w" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.628898 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x9xc\" (UniqueName: \"kubernetes.io/projected/f710888e-8c73-4d02-8ab4-f530b2562d8e-kube-api-access-7x9xc\") pod \"controller-manager-787548c685-7dskp\" (UID: \"f710888e-8c73-4d02-8ab4-f530b2562d8e\") " pod="openshift-controller-manager/controller-manager-787548c685-7dskp" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.628934 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f710888e-8c73-4d02-8ab4-f530b2562d8e-config\") pod \"controller-manager-787548c685-7dskp\" (UID: \"f710888e-8c73-4d02-8ab4-f530b2562d8e\") " pod="openshift-controller-manager/controller-manager-787548c685-7dskp" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.628988 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnhw8\" (UniqueName: \"kubernetes.io/projected/e4edd939-e622-4a82-9c29-a115042d697e-kube-api-access-dnhw8\") pod \"route-controller-manager-5c4b475647-2td8w\" (UID: \"e4edd939-e622-4a82-9c29-a115042d697e\") " pod="openshift-route-controller-manager/route-controller-manager-5c4b475647-2td8w" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.629012 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4edd939-e622-4a82-9c29-a115042d697e-serving-cert\") pod \"route-controller-manager-5c4b475647-2td8w\" (UID: \"e4edd939-e622-4a82-9c29-a115042d697e\") " 
pod="openshift-route-controller-manager/route-controller-manager-5c4b475647-2td8w" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.629034 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e4edd939-e622-4a82-9c29-a115042d697e-client-ca\") pod \"route-controller-manager-5c4b475647-2td8w\" (UID: \"e4edd939-e622-4a82-9c29-a115042d697e\") " pod="openshift-route-controller-manager/route-controller-manager-5c4b475647-2td8w" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.629075 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f710888e-8c73-4d02-8ab4-f530b2562d8e-serving-cert\") pod \"controller-manager-787548c685-7dskp\" (UID: \"f710888e-8c73-4d02-8ab4-f530b2562d8e\") " pod="openshift-controller-manager/controller-manager-787548c685-7dskp" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.629102 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f710888e-8c73-4d02-8ab4-f530b2562d8e-proxy-ca-bundles\") pod \"controller-manager-787548c685-7dskp\" (UID: \"f710888e-8c73-4d02-8ab4-f530b2562d8e\") " pod="openshift-controller-manager/controller-manager-787548c685-7dskp" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.629128 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f710888e-8c73-4d02-8ab4-f530b2562d8e-client-ca\") pod \"controller-manager-787548c685-7dskp\" (UID: \"f710888e-8c73-4d02-8ab4-f530b2562d8e\") " pod="openshift-controller-manager/controller-manager-787548c685-7dskp" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.630168 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/f710888e-8c73-4d02-8ab4-f530b2562d8e-client-ca\") pod \"controller-manager-787548c685-7dskp\" (UID: \"f710888e-8c73-4d02-8ab4-f530b2562d8e\") " pod="openshift-controller-manager/controller-manager-787548c685-7dskp" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.630242 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e4edd939-e622-4a82-9c29-a115042d697e-client-ca\") pod \"route-controller-manager-5c4b475647-2td8w\" (UID: \"e4edd939-e622-4a82-9c29-a115042d697e\") " pod="openshift-route-controller-manager/route-controller-manager-5c4b475647-2td8w" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.630398 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f710888e-8c73-4d02-8ab4-f530b2562d8e-config\") pod \"controller-manager-787548c685-7dskp\" (UID: \"f710888e-8c73-4d02-8ab4-f530b2562d8e\") " pod="openshift-controller-manager/controller-manager-787548c685-7dskp" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.630511 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4edd939-e622-4a82-9c29-a115042d697e-config\") pod \"route-controller-manager-5c4b475647-2td8w\" (UID: \"e4edd939-e622-4a82-9c29-a115042d697e\") " pod="openshift-route-controller-manager/route-controller-manager-5c4b475647-2td8w" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.633902 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f710888e-8c73-4d02-8ab4-f530b2562d8e-proxy-ca-bundles\") pod \"controller-manager-787548c685-7dskp\" (UID: \"f710888e-8c73-4d02-8ab4-f530b2562d8e\") " pod="openshift-controller-manager/controller-manager-787548c685-7dskp" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.634116 4856 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f710888e-8c73-4d02-8ab4-f530b2562d8e-serving-cert\") pod \"controller-manager-787548c685-7dskp\" (UID: \"f710888e-8c73-4d02-8ab4-f530b2562d8e\") " pod="openshift-controller-manager/controller-manager-787548c685-7dskp" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.638206 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4edd939-e622-4a82-9c29-a115042d697e-serving-cert\") pod \"route-controller-manager-5c4b475647-2td8w\" (UID: \"e4edd939-e622-4a82-9c29-a115042d697e\") " pod="openshift-route-controller-manager/route-controller-manager-5c4b475647-2td8w" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.644876 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x9xc\" (UniqueName: \"kubernetes.io/projected/f710888e-8c73-4d02-8ab4-f530b2562d8e-kube-api-access-7x9xc\") pod \"controller-manager-787548c685-7dskp\" (UID: \"f710888e-8c73-4d02-8ab4-f530b2562d8e\") " pod="openshift-controller-manager/controller-manager-787548c685-7dskp" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.645847 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnhw8\" (UniqueName: \"kubernetes.io/projected/e4edd939-e622-4a82-9c29-a115042d697e-kube-api-access-dnhw8\") pod \"route-controller-manager-5c4b475647-2td8w\" (UID: \"e4edd939-e622-4a82-9c29-a115042d697e\") " pod="openshift-route-controller-manager/route-controller-manager-5c4b475647-2td8w" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.732945 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-787548c685-7dskp" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.745064 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4b475647-2td8w" Jan 26 17:04:11 crc kubenswrapper[4856]: I0126 17:04:11.960735 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c4b475647-2td8w"] Jan 26 17:04:12 crc kubenswrapper[4856]: I0126 17:04:12.004068 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-787548c685-7dskp"] Jan 26 17:04:12 crc kubenswrapper[4856]: W0126 17:04:12.008993 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf710888e_8c73_4d02_8ab4_f530b2562d8e.slice/crio-558fe0d45eb56f6266e1083119bf551e438b00b093d926a293bf75ae0823f5a2 WatchSource:0}: Error finding container 558fe0d45eb56f6266e1083119bf551e438b00b093d926a293bf75ae0823f5a2: Status 404 returned error can't find the container with id 558fe0d45eb56f6266e1083119bf551e438b00b093d926a293bf75ae0823f5a2 Jan 26 17:04:12 crc kubenswrapper[4856]: I0126 17:04:12.941472 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4b475647-2td8w" event={"ID":"e4edd939-e622-4a82-9c29-a115042d697e","Type":"ContainerStarted","Data":"3686bdddd3134b3ea5d523f1b197d25f5fe8b842e6cb39fce2a80b12a3c66dbe"} Jan 26 17:04:12 crc kubenswrapper[4856]: I0126 17:04:12.941893 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4b475647-2td8w" event={"ID":"e4edd939-e622-4a82-9c29-a115042d697e","Type":"ContainerStarted","Data":"af1dee1a1d954458157b743dec4d52a5ba26c12dbe51718f2f422dd7e022e190"} Jan 26 17:04:12 crc kubenswrapper[4856]: I0126 17:04:12.941917 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5c4b475647-2td8w" Jan 26 17:04:12 crc 
kubenswrapper[4856]: I0126 17:04:12.943517 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-787548c685-7dskp" event={"ID":"f710888e-8c73-4d02-8ab4-f530b2562d8e","Type":"ContainerStarted","Data":"d59eb1b03a2fffa887687d53c3e17cf812845ab76f4c776e709172dc2d904988"} Jan 26 17:04:12 crc kubenswrapper[4856]: I0126 17:04:12.943564 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-787548c685-7dskp" event={"ID":"f710888e-8c73-4d02-8ab4-f530b2562d8e","Type":"ContainerStarted","Data":"558fe0d45eb56f6266e1083119bf551e438b00b093d926a293bf75ae0823f5a2"} Jan 26 17:04:12 crc kubenswrapper[4856]: I0126 17:04:12.943927 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-787548c685-7dskp" Jan 26 17:04:12 crc kubenswrapper[4856]: I0126 17:04:12.946825 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5c4b475647-2td8w" Jan 26 17:04:12 crc kubenswrapper[4856]: I0126 17:04:12.953973 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-787548c685-7dskp" Jan 26 17:04:12 crc kubenswrapper[4856]: I0126 17:04:12.960921 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5c4b475647-2td8w" podStartSLOduration=3.960893315 podStartE2EDuration="3.960893315s" podCreationTimestamp="2026-01-26 17:04:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:04:12.95869668 +0000 UTC m=+348.911950681" watchObservedRunningTime="2026-01-26 17:04:12.960893315 +0000 UTC m=+348.914147306" Jan 26 17:04:13 crc kubenswrapper[4856]: I0126 17:04:13.017621 4856 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-787548c685-7dskp" podStartSLOduration=4.017588714 podStartE2EDuration="4.017588714s" podCreationTimestamp="2026-01-26 17:04:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:04:12.996200427 +0000 UTC m=+348.949454418" watchObservedRunningTime="2026-01-26 17:04:13.017588714 +0000 UTC m=+348.970842695" Jan 26 17:04:17 crc kubenswrapper[4856]: I0126 17:04:17.346750 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 26 17:04:29 crc kubenswrapper[4856]: I0126 17:04:29.659973 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-787548c685-7dskp"] Jan 26 17:04:29 crc kubenswrapper[4856]: I0126 17:04:29.662332 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-787548c685-7dskp" podUID="f710888e-8c73-4d02-8ab4-f530b2562d8e" containerName="controller-manager" containerID="cri-o://d59eb1b03a2fffa887687d53c3e17cf812845ab76f4c776e709172dc2d904988" gracePeriod=30 Jan 26 17:04:30 crc kubenswrapper[4856]: I0126 17:04:30.052393 4856 generic.go:334] "Generic (PLEG): container finished" podID="f710888e-8c73-4d02-8ab4-f530b2562d8e" containerID="d59eb1b03a2fffa887687d53c3e17cf812845ab76f4c776e709172dc2d904988" exitCode=0 Jan 26 17:04:30 crc kubenswrapper[4856]: I0126 17:04:30.052456 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-787548c685-7dskp" event={"ID":"f710888e-8c73-4d02-8ab4-f530b2562d8e","Type":"ContainerDied","Data":"d59eb1b03a2fffa887687d53c3e17cf812845ab76f4c776e709172dc2d904988"} Jan 26 17:04:30 crc kubenswrapper[4856]: I0126 17:04:30.986122 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-787548c685-7dskp" Jan 26 17:04:31 crc kubenswrapper[4856]: I0126 17:04:31.019583 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5674dc9874-8hwcb"] Jan 26 17:04:31 crc kubenswrapper[4856]: E0126 17:04:31.020080 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f710888e-8c73-4d02-8ab4-f530b2562d8e" containerName="controller-manager" Jan 26 17:04:31 crc kubenswrapper[4856]: I0126 17:04:31.020107 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f710888e-8c73-4d02-8ab4-f530b2562d8e" containerName="controller-manager" Jan 26 17:04:31 crc kubenswrapper[4856]: I0126 17:04:31.020285 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="f710888e-8c73-4d02-8ab4-f530b2562d8e" containerName="controller-manager" Jan 26 17:04:31 crc kubenswrapper[4856]: I0126 17:04:31.021012 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5674dc9874-8hwcb" Jan 26 17:04:31 crc kubenswrapper[4856]: I0126 17:04:31.031809 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5674dc9874-8hwcb"] Jan 26 17:04:31 crc kubenswrapper[4856]: I0126 17:04:31.063685 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-787548c685-7dskp" event={"ID":"f710888e-8c73-4d02-8ab4-f530b2562d8e","Type":"ContainerDied","Data":"558fe0d45eb56f6266e1083119bf551e438b00b093d926a293bf75ae0823f5a2"} Jan 26 17:04:31 crc kubenswrapper[4856]: I0126 17:04:31.063773 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-787548c685-7dskp" Jan 26 17:04:31 crc kubenswrapper[4856]: I0126 17:04:31.063827 4856 scope.go:117] "RemoveContainer" containerID="d59eb1b03a2fffa887687d53c3e17cf812845ab76f4c776e709172dc2d904988" Jan 26 17:04:31 crc kubenswrapper[4856]: I0126 17:04:31.121988 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f710888e-8c73-4d02-8ab4-f530b2562d8e-serving-cert\") pod \"f710888e-8c73-4d02-8ab4-f530b2562d8e\" (UID: \"f710888e-8c73-4d02-8ab4-f530b2562d8e\") " Jan 26 17:04:31 crc kubenswrapper[4856]: I0126 17:04:31.122147 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f710888e-8c73-4d02-8ab4-f530b2562d8e-config\") pod \"f710888e-8c73-4d02-8ab4-f530b2562d8e\" (UID: \"f710888e-8c73-4d02-8ab4-f530b2562d8e\") " Jan 26 17:04:31 crc kubenswrapper[4856]: I0126 17:04:31.122724 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f710888e-8c73-4d02-8ab4-f530b2562d8e-proxy-ca-bundles\") pod \"f710888e-8c73-4d02-8ab4-f530b2562d8e\" (UID: \"f710888e-8c73-4d02-8ab4-f530b2562d8e\") " Jan 26 17:04:31 crc kubenswrapper[4856]: I0126 17:04:31.122760 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f710888e-8c73-4d02-8ab4-f530b2562d8e-client-ca\") pod \"f710888e-8c73-4d02-8ab4-f530b2562d8e\" (UID: \"f710888e-8c73-4d02-8ab4-f530b2562d8e\") " Jan 26 17:04:31 crc kubenswrapper[4856]: I0126 17:04:31.122854 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7x9xc\" (UniqueName: \"kubernetes.io/projected/f710888e-8c73-4d02-8ab4-f530b2562d8e-kube-api-access-7x9xc\") pod \"f710888e-8c73-4d02-8ab4-f530b2562d8e\" (UID: 
\"f710888e-8c73-4d02-8ab4-f530b2562d8e\") " Jan 26 17:04:31 crc kubenswrapper[4856]: I0126 17:04:31.123031 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brs28\" (UniqueName: \"kubernetes.io/projected/6651085c-1bee-4d12-baf3-469422e5d913-kube-api-access-brs28\") pod \"controller-manager-5674dc9874-8hwcb\" (UID: \"6651085c-1bee-4d12-baf3-469422e5d913\") " pod="openshift-controller-manager/controller-manager-5674dc9874-8hwcb" Jan 26 17:04:31 crc kubenswrapper[4856]: I0126 17:04:31.123648 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6651085c-1bee-4d12-baf3-469422e5d913-config\") pod \"controller-manager-5674dc9874-8hwcb\" (UID: \"6651085c-1bee-4d12-baf3-469422e5d913\") " pod="openshift-controller-manager/controller-manager-5674dc9874-8hwcb" Jan 26 17:04:31 crc kubenswrapper[4856]: I0126 17:04:31.123672 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6651085c-1bee-4d12-baf3-469422e5d913-serving-cert\") pod \"controller-manager-5674dc9874-8hwcb\" (UID: \"6651085c-1bee-4d12-baf3-469422e5d913\") " pod="openshift-controller-manager/controller-manager-5674dc9874-8hwcb" Jan 26 17:04:31 crc kubenswrapper[4856]: I0126 17:04:31.123672 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f710888e-8c73-4d02-8ab4-f530b2562d8e-config" (OuterVolumeSpecName: "config") pod "f710888e-8c73-4d02-8ab4-f530b2562d8e" (UID: "f710888e-8c73-4d02-8ab4-f530b2562d8e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:04:31 crc kubenswrapper[4856]: I0126 17:04:31.123674 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f710888e-8c73-4d02-8ab4-f530b2562d8e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f710888e-8c73-4d02-8ab4-f530b2562d8e" (UID: "f710888e-8c73-4d02-8ab4-f530b2562d8e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:04:31 crc kubenswrapper[4856]: I0126 17:04:31.123865 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6651085c-1bee-4d12-baf3-469422e5d913-client-ca\") pod \"controller-manager-5674dc9874-8hwcb\" (UID: \"6651085c-1bee-4d12-baf3-469422e5d913\") " pod="openshift-controller-manager/controller-manager-5674dc9874-8hwcb" Jan 26 17:04:31 crc kubenswrapper[4856]: I0126 17:04:31.123919 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6651085c-1bee-4d12-baf3-469422e5d913-proxy-ca-bundles\") pod \"controller-manager-5674dc9874-8hwcb\" (UID: \"6651085c-1bee-4d12-baf3-469422e5d913\") " pod="openshift-controller-manager/controller-manager-5674dc9874-8hwcb" Jan 26 17:04:31 crc kubenswrapper[4856]: I0126 17:04:31.123900 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f710888e-8c73-4d02-8ab4-f530b2562d8e-client-ca" (OuterVolumeSpecName: "client-ca") pod "f710888e-8c73-4d02-8ab4-f530b2562d8e" (UID: "f710888e-8c73-4d02-8ab4-f530b2562d8e"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:04:31 crc kubenswrapper[4856]: I0126 17:04:31.124100 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f710888e-8c73-4d02-8ab4-f530b2562d8e-config\") on node \"crc\" DevicePath \"\"" Jan 26 17:04:31 crc kubenswrapper[4856]: I0126 17:04:31.124266 4856 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f710888e-8c73-4d02-8ab4-f530b2562d8e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 17:04:31 crc kubenswrapper[4856]: I0126 17:04:31.128985 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f710888e-8c73-4d02-8ab4-f530b2562d8e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f710888e-8c73-4d02-8ab4-f530b2562d8e" (UID: "f710888e-8c73-4d02-8ab4-f530b2562d8e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:04:31 crc kubenswrapper[4856]: I0126 17:04:31.132845 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f710888e-8c73-4d02-8ab4-f530b2562d8e-kube-api-access-7x9xc" (OuterVolumeSpecName: "kube-api-access-7x9xc") pod "f710888e-8c73-4d02-8ab4-f530b2562d8e" (UID: "f710888e-8c73-4d02-8ab4-f530b2562d8e"). InnerVolumeSpecName "kube-api-access-7x9xc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:04:31 crc kubenswrapper[4856]: I0126 17:04:31.225580 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6651085c-1bee-4d12-baf3-469422e5d913-config\") pod \"controller-manager-5674dc9874-8hwcb\" (UID: \"6651085c-1bee-4d12-baf3-469422e5d913\") " pod="openshift-controller-manager/controller-manager-5674dc9874-8hwcb" Jan 26 17:04:31 crc kubenswrapper[4856]: I0126 17:04:31.225628 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6651085c-1bee-4d12-baf3-469422e5d913-serving-cert\") pod \"controller-manager-5674dc9874-8hwcb\" (UID: \"6651085c-1bee-4d12-baf3-469422e5d913\") " pod="openshift-controller-manager/controller-manager-5674dc9874-8hwcb" Jan 26 17:04:31 crc kubenswrapper[4856]: I0126 17:04:31.225669 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6651085c-1bee-4d12-baf3-469422e5d913-client-ca\") pod \"controller-manager-5674dc9874-8hwcb\" (UID: \"6651085c-1bee-4d12-baf3-469422e5d913\") " pod="openshift-controller-manager/controller-manager-5674dc9874-8hwcb" Jan 26 17:04:31 crc kubenswrapper[4856]: I0126 17:04:31.225728 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6651085c-1bee-4d12-baf3-469422e5d913-proxy-ca-bundles\") pod \"controller-manager-5674dc9874-8hwcb\" (UID: \"6651085c-1bee-4d12-baf3-469422e5d913\") " pod="openshift-controller-manager/controller-manager-5674dc9874-8hwcb" Jan 26 17:04:31 crc kubenswrapper[4856]: I0126 17:04:31.225759 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brs28\" (UniqueName: \"kubernetes.io/projected/6651085c-1bee-4d12-baf3-469422e5d913-kube-api-access-brs28\") pod 
\"controller-manager-5674dc9874-8hwcb\" (UID: \"6651085c-1bee-4d12-baf3-469422e5d913\") " pod="openshift-controller-manager/controller-manager-5674dc9874-8hwcb" Jan 26 17:04:31 crc kubenswrapper[4856]: I0126 17:04:31.225832 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7x9xc\" (UniqueName: \"kubernetes.io/projected/f710888e-8c73-4d02-8ab4-f530b2562d8e-kube-api-access-7x9xc\") on node \"crc\" DevicePath \"\"" Jan 26 17:04:31 crc kubenswrapper[4856]: I0126 17:04:31.225849 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f710888e-8c73-4d02-8ab4-f530b2562d8e-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 17:04:31 crc kubenswrapper[4856]: I0126 17:04:31.225863 4856 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f710888e-8c73-4d02-8ab4-f530b2562d8e-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 17:04:31 crc kubenswrapper[4856]: I0126 17:04:31.227251 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6651085c-1bee-4d12-baf3-469422e5d913-client-ca\") pod \"controller-manager-5674dc9874-8hwcb\" (UID: \"6651085c-1bee-4d12-baf3-469422e5d913\") " pod="openshift-controller-manager/controller-manager-5674dc9874-8hwcb" Jan 26 17:04:31 crc kubenswrapper[4856]: I0126 17:04:31.227249 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6651085c-1bee-4d12-baf3-469422e5d913-proxy-ca-bundles\") pod \"controller-manager-5674dc9874-8hwcb\" (UID: \"6651085c-1bee-4d12-baf3-469422e5d913\") " pod="openshift-controller-manager/controller-manager-5674dc9874-8hwcb" Jan 26 17:04:31 crc kubenswrapper[4856]: I0126 17:04:31.227427 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6651085c-1bee-4d12-baf3-469422e5d913-config\") pod \"controller-manager-5674dc9874-8hwcb\" (UID: \"6651085c-1bee-4d12-baf3-469422e5d913\") " pod="openshift-controller-manager/controller-manager-5674dc9874-8hwcb" Jan 26 17:04:31 crc kubenswrapper[4856]: I0126 17:04:31.232564 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6651085c-1bee-4d12-baf3-469422e5d913-serving-cert\") pod \"controller-manager-5674dc9874-8hwcb\" (UID: \"6651085c-1bee-4d12-baf3-469422e5d913\") " pod="openshift-controller-manager/controller-manager-5674dc9874-8hwcb" Jan 26 17:04:31 crc kubenswrapper[4856]: I0126 17:04:31.241466 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brs28\" (UniqueName: \"kubernetes.io/projected/6651085c-1bee-4d12-baf3-469422e5d913-kube-api-access-brs28\") pod \"controller-manager-5674dc9874-8hwcb\" (UID: \"6651085c-1bee-4d12-baf3-469422e5d913\") " pod="openshift-controller-manager/controller-manager-5674dc9874-8hwcb" Jan 26 17:04:31 crc kubenswrapper[4856]: I0126 17:04:31.345253 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5674dc9874-8hwcb" Jan 26 17:04:31 crc kubenswrapper[4856]: I0126 17:04:31.409372 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-787548c685-7dskp"] Jan 26 17:04:31 crc kubenswrapper[4856]: I0126 17:04:31.414749 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-787548c685-7dskp"] Jan 26 17:04:31 crc kubenswrapper[4856]: I0126 17:04:31.766252 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5674dc9874-8hwcb"] Jan 26 17:04:31 crc kubenswrapper[4856]: W0126 17:04:31.773277 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6651085c_1bee_4d12_baf3_469422e5d913.slice/crio-325d697f9bb3dc3b8b8d0844c09109968aa162bddb4a735c7737f413cf361e04 WatchSource:0}: Error finding container 325d697f9bb3dc3b8b8d0844c09109968aa162bddb4a735c7737f413cf361e04: Status 404 returned error can't find the container with id 325d697f9bb3dc3b8b8d0844c09109968aa162bddb4a735c7737f413cf361e04 Jan 26 17:04:32 crc kubenswrapper[4856]: I0126 17:04:32.075286 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5674dc9874-8hwcb" event={"ID":"6651085c-1bee-4d12-baf3-469422e5d913","Type":"ContainerStarted","Data":"873dac22c2f5e8d3028987af3886aaebbed0d2fee645d66d41add1b55db0fa4c"} Jan 26 17:04:32 crc kubenswrapper[4856]: I0126 17:04:32.075698 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5674dc9874-8hwcb" event={"ID":"6651085c-1bee-4d12-baf3-469422e5d913","Type":"ContainerStarted","Data":"325d697f9bb3dc3b8b8d0844c09109968aa162bddb4a735c7737f413cf361e04"} Jan 26 17:04:32 crc kubenswrapper[4856]: I0126 17:04:32.075723 4856 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5674dc9874-8hwcb" Jan 26 17:04:32 crc kubenswrapper[4856]: I0126 17:04:32.081995 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5674dc9874-8hwcb" Jan 26 17:04:32 crc kubenswrapper[4856]: I0126 17:04:32.099368 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5674dc9874-8hwcb" podStartSLOduration=3.099325577 podStartE2EDuration="3.099325577s" podCreationTimestamp="2026-01-26 17:04:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:04:32.092142273 +0000 UTC m=+368.045396274" watchObservedRunningTime="2026-01-26 17:04:32.099325577 +0000 UTC m=+368.052579558" Jan 26 17:04:33 crc kubenswrapper[4856]: I0126 17:04:33.404459 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f710888e-8c73-4d02-8ab4-f530b2562d8e" path="/var/lib/kubelet/pods/f710888e-8c73-4d02-8ab4-f530b2562d8e/volumes" Jan 26 17:04:56 crc kubenswrapper[4856]: I0126 17:04:56.939687 4856 patch_prober.go:28] interesting pod/machine-config-daemon-xm9cq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:04:56 crc kubenswrapper[4856]: I0126 17:04:56.940311 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:05:25 crc kubenswrapper[4856]: I0126 17:05:25.627051 4856 scope.go:117] "RemoveContainer" 
containerID="59ba0f131eee14df389391e46bc772894e49e976c3663df5fb3bc98b3cb4a3d6" Jan 26 17:05:25 crc kubenswrapper[4856]: I0126 17:05:25.646487 4856 scope.go:117] "RemoveContainer" containerID="2ffc4b80383322e2c628a33ac37f15fd77c7650ca108c022f97bb48aad023462" Jan 26 17:05:25 crc kubenswrapper[4856]: I0126 17:05:25.665620 4856 scope.go:117] "RemoveContainer" containerID="40d432159a07ba20bf95f058bf8a597d67cbd8d852519bed035694f8ba3d8ec4" Jan 26 17:05:25 crc kubenswrapper[4856]: I0126 17:05:25.696964 4856 scope.go:117] "RemoveContainer" containerID="89975b8f9428f81ab5d3fb48ced5dd9c837bea2feea3b89f5f7ff8d7d5d15b3e" Jan 26 17:05:25 crc kubenswrapper[4856]: I0126 17:05:25.715696 4856 scope.go:117] "RemoveContainer" containerID="b03a01b651d9f66da4dd1f6e0d29ad97c0d6ae46b644c3d997d8dc99476706df" Jan 26 17:05:25 crc kubenswrapper[4856]: I0126 17:05:25.732128 4856 scope.go:117] "RemoveContainer" containerID="f2daa3f13d37b7ae19dfca406b6b5cfdcd4f211287f7d410193f2fee36a24553" Jan 26 17:05:26 crc kubenswrapper[4856]: I0126 17:05:26.939026 4856 patch_prober.go:28] interesting pod/machine-config-daemon-xm9cq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:05:26 crc kubenswrapper[4856]: I0126 17:05:26.941394 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:05:56 crc kubenswrapper[4856]: I0126 17:05:56.939685 4856 patch_prober.go:28] interesting pod/machine-config-daemon-xm9cq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:05:56 crc kubenswrapper[4856]: I0126 17:05:56.940231 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:05:56 crc kubenswrapper[4856]: I0126 17:05:56.940350 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" Jan 26 17:05:56 crc kubenswrapper[4856]: I0126 17:05:56.941255 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9758bfdfd1807e791935ac7ec93246863e5867351e35d27ffaff68ae79110e9c"} pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 17:05:56 crc kubenswrapper[4856]: I0126 17:05:56.941338 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" containerName="machine-config-daemon" containerID="cri-o://9758bfdfd1807e791935ac7ec93246863e5867351e35d27ffaff68ae79110e9c" gracePeriod=600 Jan 26 17:05:57 crc kubenswrapper[4856]: I0126 17:05:57.609018 4856 generic.go:334] "Generic (PLEG): container finished" podID="63c75ede-5170-4db0-811b-5217ef8d72b3" containerID="9758bfdfd1807e791935ac7ec93246863e5867351e35d27ffaff68ae79110e9c" exitCode=0 Jan 26 17:05:57 crc kubenswrapper[4856]: I0126 17:05:57.609088 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" 
event={"ID":"63c75ede-5170-4db0-811b-5217ef8d72b3","Type":"ContainerDied","Data":"9758bfdfd1807e791935ac7ec93246863e5867351e35d27ffaff68ae79110e9c"} Jan 26 17:05:57 crc kubenswrapper[4856]: I0126 17:05:57.609478 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" event={"ID":"63c75ede-5170-4db0-811b-5217ef8d72b3","Type":"ContainerStarted","Data":"fe42c0299ac9f35a2260caaf7226f7e2161da013442117dab0d25a7c69c46115"} Jan 26 17:05:57 crc kubenswrapper[4856]: I0126 17:05:57.609506 4856 scope.go:117] "RemoveContainer" containerID="54ca3fa13d9e8d442efa93b44a870369f7df3fe7562d77b98528f5c19a751f18" Jan 26 17:07:25 crc kubenswrapper[4856]: I0126 17:07:25.784371 4856 scope.go:117] "RemoveContainer" containerID="f96de8f882682ea8e5a30970c1ce8d34c4b60cb434e13968e3bd6879b62b071b" Jan 26 17:08:26 crc kubenswrapper[4856]: I0126 17:08:26.938550 4856 patch_prober.go:28] interesting pod/machine-config-daemon-xm9cq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:08:26 crc kubenswrapper[4856]: I0126 17:08:26.939789 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:08:56 crc kubenswrapper[4856]: I0126 17:08:56.939167 4856 patch_prober.go:28] interesting pod/machine-config-daemon-xm9cq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:08:56 crc kubenswrapper[4856]: I0126 
17:08:56.939502 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:09:17 crc kubenswrapper[4856]: I0126 17:09:17.926299 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pxh94"] Jan 26 17:09:17 crc kubenswrapper[4856]: I0126 17:09:17.929212 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerName="sbdb" containerID="cri-o://11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a" gracePeriod=30 Jan 26 17:09:17 crc kubenswrapper[4856]: I0126 17:09:17.929247 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3" gracePeriod=30 Jan 26 17:09:17 crc kubenswrapper[4856]: I0126 17:09:17.929211 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerName="nbdb" containerID="cri-o://4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7" gracePeriod=30 Jan 26 17:09:17 crc kubenswrapper[4856]: I0126 17:09:17.929320 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerName="northd" containerID="cri-o://b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde" gracePeriod=30 Jan 26 17:09:17 crc 
kubenswrapper[4856]: I0126 17:09:17.929418 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerName="ovn-acl-logging" containerID="cri-o://83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc" gracePeriod=30 Jan 26 17:09:17 crc kubenswrapper[4856]: I0126 17:09:17.929412 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerName="kube-rbac-proxy-node" containerID="cri-o://e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1" gracePeriod=30 Jan 26 17:09:17 crc kubenswrapper[4856]: I0126 17:09:17.929130 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerName="ovn-controller" containerID="cri-o://25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b" gracePeriod=30 Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.000696 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerName="ovnkube-controller" containerID="cri-o://984517fa62d3e73cf4b33b7c4a101f8221c940f66938120373b7d41cac9c5e5c" gracePeriod=30 Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.228596 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pxh94_ab5b6f50-172b-4535-a0f9-5d103bcab4e7/ovnkube-controller/3.log" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.232373 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pxh94_ab5b6f50-172b-4535-a0f9-5d103bcab4e7/ovn-acl-logging/0.log" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.233280 4856 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pxh94_ab5b6f50-172b-4535-a0f9-5d103bcab4e7/ovn-controller/0.log" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.235919 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.298877 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6t25x"] Jan 26 17:09:18 crc kubenswrapper[4856]: E0126 17:09:18.299204 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerName="ovnkube-controller" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.299225 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerName="ovnkube-controller" Jan 26 17:09:18 crc kubenswrapper[4856]: E0126 17:09:18.299234 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerName="ovnkube-controller" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.299239 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerName="ovnkube-controller" Jan 26 17:09:18 crc kubenswrapper[4856]: E0126 17:09:18.299247 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerName="kube-rbac-proxy-node" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.299252 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerName="kube-rbac-proxy-node" Jan 26 17:09:18 crc kubenswrapper[4856]: E0126 17:09:18.299261 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerName="sbdb" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.299266 4856 
state_mem.go:107] "Deleted CPUSet assignment" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerName="sbdb" Jan 26 17:09:18 crc kubenswrapper[4856]: E0126 17:09:18.299279 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerName="ovn-acl-logging" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.299285 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerName="ovn-acl-logging" Jan 26 17:09:18 crc kubenswrapper[4856]: E0126 17:09:18.299293 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerName="northd" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.299299 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerName="northd" Jan 26 17:09:18 crc kubenswrapper[4856]: E0126 17:09:18.299308 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerName="ovn-controller" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.299314 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerName="ovn-controller" Jan 26 17:09:18 crc kubenswrapper[4856]: E0126 17:09:18.299325 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerName="nbdb" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.299330 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerName="nbdb" Jan 26 17:09:18 crc kubenswrapper[4856]: E0126 17:09:18.299339 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerName="kubecfg-setup" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.299344 4856 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerName="kubecfg-setup" Jan 26 17:09:18 crc kubenswrapper[4856]: E0126 17:09:18.299352 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerName="ovnkube-controller" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.299357 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerName="ovnkube-controller" Jan 26 17:09:18 crc kubenswrapper[4856]: E0126 17:09:18.299366 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.299372 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.299491 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerName="ovn-controller" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.299499 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerName="ovnkube-controller" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.299511 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerName="sbdb" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.299517 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerName="ovnkube-controller" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.299526 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerName="ovnkube-controller" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.299551 4856 
memory_manager.go:354] "RemoveStaleState removing state" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerName="kube-rbac-proxy-node" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.299560 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerName="ovn-acl-logging" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.299568 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.299578 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerName="nbdb" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.299585 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerName="northd" Jan 26 17:09:18 crc kubenswrapper[4856]: E0126 17:09:18.299670 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerName="ovnkube-controller" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.299685 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerName="ovnkube-controller" Jan 26 17:09:18 crc kubenswrapper[4856]: E0126 17:09:18.299692 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerName="ovnkube-controller" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.299698 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerName="ovnkube-controller" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.299787 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerName="ovnkube-controller" Jan 26 17:09:18 crc kubenswrapper[4856]: 
I0126 17:09:18.299795 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerName="ovnkube-controller" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.301448 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.422913 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-run-ovn\") pod \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.422978 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-host-run-netns\") pod \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.423039 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-host-cni-netd\") pod \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.423083 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "ab5b6f50-172b-4535-a0f9-5d103bcab4e7" (UID: "ab5b6f50-172b-4535-a0f9-5d103bcab4e7"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.423098 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "ab5b6f50-172b-4535-a0f9-5d103bcab4e7" (UID: "ab5b6f50-172b-4535-a0f9-5d103bcab4e7"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.423118 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "ab5b6f50-172b-4535-a0f9-5d103bcab4e7" (UID: "ab5b6f50-172b-4535-a0f9-5d103bcab4e7"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.423770 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-host-cni-bin\") pod \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.423839 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-ovn-node-metrics-cert\") pod \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.423871 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-systemd-units\") pod \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\" (UID: 
\"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.423918 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-host-kubelet\") pod \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.423944 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-etc-openvswitch\") pod \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.423971 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-run-openvswitch\") pod \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.424015 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-run-systemd\") pod \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.424051 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-host-run-ovn-kubernetes\") pod \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.424086 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" 
(UniqueName: \"kubernetes.io/configmap/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-env-overrides\") pod \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.424114 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-node-log\") pod \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.424147 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-ovnkube-config\") pod \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.424174 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.424217 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-ovnkube-script-lib\") pod \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.424249 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-var-lib-openvswitch\") pod \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\" (UID: 
\"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.424280 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9kdbz\" (UniqueName: \"kubernetes.io/projected/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-kube-api-access-9kdbz\") pod \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.424311 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-log-socket\") pod \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.424338 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-host-slash\") pod \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\" (UID: \"ab5b6f50-172b-4535-a0f9-5d103bcab4e7\") " Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.424010 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "ab5b6f50-172b-4535-a0f9-5d103bcab4e7" (UID: "ab5b6f50-172b-4535-a0f9-5d103bcab4e7"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.425630 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-node-log" (OuterVolumeSpecName: "node-log") pod "ab5b6f50-172b-4535-a0f9-5d103bcab4e7" (UID: "ab5b6f50-172b-4535-a0f9-5d103bcab4e7"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.424034 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "ab5b6f50-172b-4535-a0f9-5d103bcab4e7" (UID: "ab5b6f50-172b-4535-a0f9-5d103bcab4e7"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.425277 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "ab5b6f50-172b-4535-a0f9-5d103bcab4e7" (UID: "ab5b6f50-172b-4535-a0f9-5d103bcab4e7"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.425340 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-log-socket" (OuterVolumeSpecName: "log-socket") pod "ab5b6f50-172b-4535-a0f9-5d103bcab4e7" (UID: "ab5b6f50-172b-4535-a0f9-5d103bcab4e7"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.425386 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "ab5b6f50-172b-4535-a0f9-5d103bcab4e7" (UID: "ab5b6f50-172b-4535-a0f9-5d103bcab4e7"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.425685 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "ab5b6f50-172b-4535-a0f9-5d103bcab4e7" (UID: "ab5b6f50-172b-4535-a0f9-5d103bcab4e7"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.425393 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "ab5b6f50-172b-4535-a0f9-5d103bcab4e7" (UID: "ab5b6f50-172b-4535-a0f9-5d103bcab4e7"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.425415 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "ab5b6f50-172b-4535-a0f9-5d103bcab4e7" (UID: "ab5b6f50-172b-4535-a0f9-5d103bcab4e7"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.425601 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "ab5b6f50-172b-4535-a0f9-5d103bcab4e7" (UID: "ab5b6f50-172b-4535-a0f9-5d103bcab4e7"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.425634 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "ab5b6f50-172b-4535-a0f9-5d103bcab4e7" (UID: "ab5b6f50-172b-4535-a0f9-5d103bcab4e7"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.425772 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "ab5b6f50-172b-4535-a0f9-5d103bcab4e7" (UID: "ab5b6f50-172b-4535-a0f9-5d103bcab4e7"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.425920 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "ab5b6f50-172b-4535-a0f9-5d103bcab4e7" (UID: "ab5b6f50-172b-4535-a0f9-5d103bcab4e7"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.427089 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d70f72f7-ff5e-4906-8622-9cddfe769d55-ovnkube-config\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.427231 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d70f72f7-ff5e-4906-8622-9cddfe769d55-env-overrides\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.427279 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d70f72f7-ff5e-4906-8622-9cddfe769d55-host-kubelet\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.427310 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d70f72f7-ff5e-4906-8622-9cddfe769d55-host-run-ovn-kubernetes\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.427353 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnrjd\" (UniqueName: \"kubernetes.io/projected/d70f72f7-ff5e-4906-8622-9cddfe769d55-kube-api-access-cnrjd\") pod \"ovnkube-node-6t25x\" 
(UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.427378 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d70f72f7-ff5e-4906-8622-9cddfe769d55-host-run-netns\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.427405 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d70f72f7-ff5e-4906-8622-9cddfe769d55-ovnkube-script-lib\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.427435 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d70f72f7-ff5e-4906-8622-9cddfe769d55-var-lib-openvswitch\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.427470 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d70f72f7-ff5e-4906-8622-9cddfe769d55-etc-openvswitch\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.427756 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/d70f72f7-ff5e-4906-8622-9cddfe769d55-run-openvswitch\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.427796 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d70f72f7-ff5e-4906-8622-9cddfe769d55-run-ovn\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.427857 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d70f72f7-ff5e-4906-8622-9cddfe769d55-ovn-node-metrics-cert\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.427900 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d70f72f7-ff5e-4906-8622-9cddfe769d55-run-systemd\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.427929 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d70f72f7-ff5e-4906-8622-9cddfe769d55-log-socket\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.427956 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/d70f72f7-ff5e-4906-8622-9cddfe769d55-host-cni-bin\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.427995 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d70f72f7-ff5e-4906-8622-9cddfe769d55-node-log\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.428132 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d70f72f7-ff5e-4906-8622-9cddfe769d55-host-cni-netd\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.428173 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d70f72f7-ff5e-4906-8622-9cddfe769d55-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.428561 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d70f72f7-ff5e-4906-8622-9cddfe769d55-host-slash\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.428674 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d70f72f7-ff5e-4906-8622-9cddfe769d55-systemd-units\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.428973 4856 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.428998 4856 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.429017 4856 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-log-socket\") on node \"crc\" DevicePath \"\"" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.429033 4856 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.429051 4856 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.429063 4856 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.429075 4856 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" 
(UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.429088 4856 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.429137 4856 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.429151 4856 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.429165 4856 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.429183 4856 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.429199 4856 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.429211 4856 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-node-log\") on node \"crc\" DevicePath 
\"\"" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.429223 4856 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.429240 4856 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.429974 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "ab5b6f50-172b-4535-a0f9-5d103bcab4e7" (UID: "ab5b6f50-172b-4535-a0f9-5d103bcab4e7"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.430326 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-kube-api-access-9kdbz" (OuterVolumeSpecName: "kube-api-access-9kdbz") pod "ab5b6f50-172b-4535-a0f9-5d103bcab4e7" (UID: "ab5b6f50-172b-4535-a0f9-5d103bcab4e7"). InnerVolumeSpecName "kube-api-access-9kdbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.438070 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-host-slash" (OuterVolumeSpecName: "host-slash") pod "ab5b6f50-172b-4535-a0f9-5d103bcab4e7" (UID: "ab5b6f50-172b-4535-a0f9-5d103bcab4e7"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.440137 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "ab5b6f50-172b-4535-a0f9-5d103bcab4e7" (UID: "ab5b6f50-172b-4535-a0f9-5d103bcab4e7"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.530008 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d70f72f7-ff5e-4906-8622-9cddfe769d55-node-log\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.530083 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d70f72f7-ff5e-4906-8622-9cddfe769d55-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.530113 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d70f72f7-ff5e-4906-8622-9cddfe769d55-host-cni-netd\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.530141 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d70f72f7-ff5e-4906-8622-9cddfe769d55-host-slash\") pod \"ovnkube-node-6t25x\" (UID: 
\"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.530162 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d70f72f7-ff5e-4906-8622-9cddfe769d55-systemd-units\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.530194 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d70f72f7-ff5e-4906-8622-9cddfe769d55-ovnkube-config\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.530185 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d70f72f7-ff5e-4906-8622-9cddfe769d55-node-log\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.530217 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d70f72f7-ff5e-4906-8622-9cddfe769d55-env-overrides\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.530236 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d70f72f7-ff5e-4906-8622-9cddfe769d55-host-cni-netd\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc 
kubenswrapper[4856]: I0126 17:09:18.530274 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d70f72f7-ff5e-4906-8622-9cddfe769d55-host-kubelet\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.530277 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d70f72f7-ff5e-4906-8622-9cddfe769d55-systemd-units\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.530245 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d70f72f7-ff5e-4906-8622-9cddfe769d55-host-kubelet\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.530290 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d70f72f7-ff5e-4906-8622-9cddfe769d55-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.530351 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d70f72f7-ff5e-4906-8622-9cddfe769d55-host-run-ovn-kubernetes\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.530385 
4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnrjd\" (UniqueName: \"kubernetes.io/projected/d70f72f7-ff5e-4906-8622-9cddfe769d55-kube-api-access-cnrjd\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.530406 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d70f72f7-ff5e-4906-8622-9cddfe769d55-ovnkube-script-lib\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.530434 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d70f72f7-ff5e-4906-8622-9cddfe769d55-host-run-netns\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.530455 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d70f72f7-ff5e-4906-8622-9cddfe769d55-var-lib-openvswitch\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.530459 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d70f72f7-ff5e-4906-8622-9cddfe769d55-host-run-ovn-kubernetes\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.530493 4856 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d70f72f7-ff5e-4906-8622-9cddfe769d55-etc-openvswitch\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.530516 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d70f72f7-ff5e-4906-8622-9cddfe769d55-run-openvswitch\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.530518 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d70f72f7-ff5e-4906-8622-9cddfe769d55-host-run-netns\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.530576 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d70f72f7-ff5e-4906-8622-9cddfe769d55-run-ovn\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.530647 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d70f72f7-ff5e-4906-8622-9cddfe769d55-ovn-node-metrics-cert\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.530683 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/d70f72f7-ff5e-4906-8622-9cddfe769d55-run-systemd\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.530423 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d70f72f7-ff5e-4906-8622-9cddfe769d55-host-slash\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.530734 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d70f72f7-ff5e-4906-8622-9cddfe769d55-etc-openvswitch\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.530780 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d70f72f7-ff5e-4906-8622-9cddfe769d55-run-openvswitch\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.530797 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d70f72f7-ff5e-4906-8622-9cddfe769d55-run-ovn\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.530810 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d70f72f7-ff5e-4906-8622-9cddfe769d55-var-lib-openvswitch\") pod \"ovnkube-node-6t25x\" (UID: 
\"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.530837 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d70f72f7-ff5e-4906-8622-9cddfe769d55-log-socket\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.530847 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d70f72f7-ff5e-4906-8622-9cddfe769d55-run-systemd\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.530870 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d70f72f7-ff5e-4906-8622-9cddfe769d55-host-cni-bin\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.530869 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d70f72f7-ff5e-4906-8622-9cddfe769d55-log-socket\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.530922 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d70f72f7-ff5e-4906-8622-9cddfe769d55-env-overrides\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc 
kubenswrapper[4856]: I0126 17:09:18.530983 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d70f72f7-ff5e-4906-8622-9cddfe769d55-host-cni-bin\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.531042 4856 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.531063 4856 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.531075 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9kdbz\" (UniqueName: \"kubernetes.io/projected/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-kube-api-access-9kdbz\") on node \"crc\" DevicePath \"\"" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.531084 4856 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ab5b6f50-172b-4535-a0f9-5d103bcab4e7-host-slash\") on node \"crc\" DevicePath \"\"" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.531422 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d70f72f7-ff5e-4906-8622-9cddfe769d55-ovnkube-config\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.532294 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/d70f72f7-ff5e-4906-8622-9cddfe769d55-ovnkube-script-lib\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.533872 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d70f72f7-ff5e-4906-8622-9cddfe769d55-ovn-node-metrics-cert\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.549066 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnrjd\" (UniqueName: \"kubernetes.io/projected/d70f72f7-ff5e-4906-8622-9cddfe769d55-kube-api-access-cnrjd\") pod \"ovnkube-node-6t25x\" (UID: \"d70f72f7-ff5e-4906-8622-9cddfe769d55\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.617865 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.968583 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pxh94_ab5b6f50-172b-4535-a0f9-5d103bcab4e7/ovnkube-controller/3.log" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.971729 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pxh94_ab5b6f50-172b-4535-a0f9-5d103bcab4e7/ovn-acl-logging/0.log" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.972363 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pxh94_ab5b6f50-172b-4535-a0f9-5d103bcab4e7/ovn-controller/0.log" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.972890 4856 generic.go:334] "Generic (PLEG): container finished" podID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerID="984517fa62d3e73cf4b33b7c4a101f8221c940f66938120373b7d41cac9c5e5c" exitCode=0 Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.972917 4856 generic.go:334] "Generic (PLEG): container finished" podID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerID="11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a" exitCode=0 Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.972925 4856 generic.go:334] "Generic (PLEG): container finished" podID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerID="4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7" exitCode=0 Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.972948 4856 generic.go:334] "Generic (PLEG): container finished" podID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerID="b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde" exitCode=0 Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.972955 4856 generic.go:334] "Generic (PLEG): container finished" podID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" 
containerID="7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3" exitCode=0 Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.972962 4856 generic.go:334] "Generic (PLEG): container finished" podID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerID="e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1" exitCode=0 Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.972969 4856 generic.go:334] "Generic (PLEG): container finished" podID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerID="83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc" exitCode=143 Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.972975 4856 generic.go:334] "Generic (PLEG): container finished" podID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" containerID="25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b" exitCode=143 Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.972973 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" event={"ID":"ab5b6f50-172b-4535-a0f9-5d103bcab4e7","Type":"ContainerDied","Data":"984517fa62d3e73cf4b33b7c4a101f8221c940f66938120373b7d41cac9c5e5c"} Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.973005 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.973036 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" event={"ID":"ab5b6f50-172b-4535-a0f9-5d103bcab4e7","Type":"ContainerDied","Data":"11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a"} Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.973051 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" event={"ID":"ab5b6f50-172b-4535-a0f9-5d103bcab4e7","Type":"ContainerDied","Data":"4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7"} Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.973062 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" event={"ID":"ab5b6f50-172b-4535-a0f9-5d103bcab4e7","Type":"ContainerDied","Data":"b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde"} Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.973073 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" event={"ID":"ab5b6f50-172b-4535-a0f9-5d103bcab4e7","Type":"ContainerDied","Data":"7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3"} Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.973082 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" event={"ID":"ab5b6f50-172b-4535-a0f9-5d103bcab4e7","Type":"ContainerDied","Data":"e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1"} Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.973103 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"203d756903498bed1c57a3e87a95f4b24f808514567a99505ba2f3cfec468cb6"} Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.973118 4856 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a"} Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.973124 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7"} Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.973126 4856 scope.go:117] "RemoveContainer" containerID="984517fa62d3e73cf4b33b7c4a101f8221c940f66938120373b7d41cac9c5e5c" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.973130 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde"} Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.973214 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3"} Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.973226 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1"} Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.973233 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc"} Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.973238 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b"} Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.973244 4856 pod_container_deletor.go:114] "Failed to issue the request to remove 
container" containerID={"Type":"cri-o","ID":"d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8"} Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.973277 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" event={"ID":"ab5b6f50-172b-4535-a0f9-5d103bcab4e7","Type":"ContainerDied","Data":"83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc"} Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.973294 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"984517fa62d3e73cf4b33b7c4a101f8221c940f66938120373b7d41cac9c5e5c"} Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.973300 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"203d756903498bed1c57a3e87a95f4b24f808514567a99505ba2f3cfec468cb6"} Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.973305 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a"} Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.973311 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7"} Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.973316 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde"} Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.973321 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3"} Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.973342 4856 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1"} Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.973347 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc"} Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.973353 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b"} Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.973359 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8"} Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.973368 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" event={"ID":"ab5b6f50-172b-4535-a0f9-5d103bcab4e7","Type":"ContainerDied","Data":"25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b"} Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.973378 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"984517fa62d3e73cf4b33b7c4a101f8221c940f66938120373b7d41cac9c5e5c"} Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.973386 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"203d756903498bed1c57a3e87a95f4b24f808514567a99505ba2f3cfec468cb6"} Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.973393 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a"} Jan 26 
17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.973399 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7"} Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.973420 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde"} Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.973426 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3"} Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.973431 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1"} Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.973436 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc"} Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.973441 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b"} Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.973446 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8"} Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.973453 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pxh94" 
event={"ID":"ab5b6f50-172b-4535-a0f9-5d103bcab4e7","Type":"ContainerDied","Data":"a1b2fe845f0957cc37219c78a754b5c2b9acc25bf2ef8f7083ca734c4c5c68b9"} Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.973460 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"984517fa62d3e73cf4b33b7c4a101f8221c940f66938120373b7d41cac9c5e5c"} Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.973466 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"203d756903498bed1c57a3e87a95f4b24f808514567a99505ba2f3cfec468cb6"} Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.973472 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a"} Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.973477 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7"} Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.973496 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde"} Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.973502 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3"} Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.973507 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1"} Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.973512 4856 pod_container_deletor.go:114] "Failed 
to issue the request to remove container" containerID={"Type":"cri-o","ID":"83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc"} Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.973517 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b"} Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.973546 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8"} Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.975905 4856 generic.go:334] "Generic (PLEG): container finished" podID="d70f72f7-ff5e-4906-8622-9cddfe769d55" containerID="77c4fd70275b917cb0f73727113a417c575c523f2153a6e00e93ec081a5c0141" exitCode=0 Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.975927 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" event={"ID":"d70f72f7-ff5e-4906-8622-9cddfe769d55","Type":"ContainerDied","Data":"77c4fd70275b917cb0f73727113a417c575c523f2153a6e00e93ec081a5c0141"} Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.975948 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" event={"ID":"d70f72f7-ff5e-4906-8622-9cddfe769d55","Type":"ContainerStarted","Data":"44ee9c730d2fa91ea1f9b547ba330dbc15901ea95b7027620fc8366ea50bb691"} Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.979646 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rq622_7a742e7b-c420-46e3-9e96-e9c744af6124/kube-multus/2.log" Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.980580 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rq622_7a742e7b-c420-46e3-9e96-e9c744af6124/kube-multus/1.log" Jan 26 17:09:18 crc 
kubenswrapper[4856]: I0126 17:09:18.980622 4856 generic.go:334] "Generic (PLEG): container finished" podID="7a742e7b-c420-46e3-9e96-e9c744af6124" containerID="ddec0dbea657c6160cfdfd78886d5ae335dab8b667b0e0e3813dffa86a2ae2dc" exitCode=2 Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.980654 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rq622" event={"ID":"7a742e7b-c420-46e3-9e96-e9c744af6124","Type":"ContainerDied","Data":"ddec0dbea657c6160cfdfd78886d5ae335dab8b667b0e0e3813dffa86a2ae2dc"} Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.980676 4856 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"afeb20035224feeab28a92ac77b43a24e653e49c56a25590a9861019a2b7a8ff"} Jan 26 17:09:18 crc kubenswrapper[4856]: I0126 17:09:18.981090 4856 scope.go:117] "RemoveContainer" containerID="ddec0dbea657c6160cfdfd78886d5ae335dab8b667b0e0e3813dffa86a2ae2dc" Jan 26 17:09:18 crc kubenswrapper[4856]: E0126 17:09:18.981343 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-rq622_openshift-multus(7a742e7b-c420-46e3-9e96-e9c744af6124)\"" pod="openshift-multus/multus-rq622" podUID="7a742e7b-c420-46e3-9e96-e9c744af6124" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.002550 4856 scope.go:117] "RemoveContainer" containerID="203d756903498bed1c57a3e87a95f4b24f808514567a99505ba2f3cfec468cb6" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.025899 4856 scope.go:117] "RemoveContainer" containerID="11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.046483 4856 scope.go:117] "RemoveContainer" containerID="4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.046628 4856 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pxh94"] Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.051179 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pxh94"] Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.087009 4856 scope.go:117] "RemoveContainer" containerID="b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.099405 4856 scope.go:117] "RemoveContainer" containerID="7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.113323 4856 scope.go:117] "RemoveContainer" containerID="e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.126899 4856 scope.go:117] "RemoveContainer" containerID="83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.148402 4856 scope.go:117] "RemoveContainer" containerID="25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.167511 4856 scope.go:117] "RemoveContainer" containerID="d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.180249 4856 scope.go:117] "RemoveContainer" containerID="984517fa62d3e73cf4b33b7c4a101f8221c940f66938120373b7d41cac9c5e5c" Jan 26 17:09:19 crc kubenswrapper[4856]: E0126 17:09:19.180816 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"984517fa62d3e73cf4b33b7c4a101f8221c940f66938120373b7d41cac9c5e5c\": container with ID starting with 984517fa62d3e73cf4b33b7c4a101f8221c940f66938120373b7d41cac9c5e5c not found: ID does not exist" containerID="984517fa62d3e73cf4b33b7c4a101f8221c940f66938120373b7d41cac9c5e5c" Jan 26 17:09:19 crc 
kubenswrapper[4856]: I0126 17:09:19.180854 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"984517fa62d3e73cf4b33b7c4a101f8221c940f66938120373b7d41cac9c5e5c"} err="failed to get container status \"984517fa62d3e73cf4b33b7c4a101f8221c940f66938120373b7d41cac9c5e5c\": rpc error: code = NotFound desc = could not find container \"984517fa62d3e73cf4b33b7c4a101f8221c940f66938120373b7d41cac9c5e5c\": container with ID starting with 984517fa62d3e73cf4b33b7c4a101f8221c940f66938120373b7d41cac9c5e5c not found: ID does not exist" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.180881 4856 scope.go:117] "RemoveContainer" containerID="203d756903498bed1c57a3e87a95f4b24f808514567a99505ba2f3cfec468cb6" Jan 26 17:09:19 crc kubenswrapper[4856]: E0126 17:09:19.181292 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"203d756903498bed1c57a3e87a95f4b24f808514567a99505ba2f3cfec468cb6\": container with ID starting with 203d756903498bed1c57a3e87a95f4b24f808514567a99505ba2f3cfec468cb6 not found: ID does not exist" containerID="203d756903498bed1c57a3e87a95f4b24f808514567a99505ba2f3cfec468cb6" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.181316 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"203d756903498bed1c57a3e87a95f4b24f808514567a99505ba2f3cfec468cb6"} err="failed to get container status \"203d756903498bed1c57a3e87a95f4b24f808514567a99505ba2f3cfec468cb6\": rpc error: code = NotFound desc = could not find container \"203d756903498bed1c57a3e87a95f4b24f808514567a99505ba2f3cfec468cb6\": container with ID starting with 203d756903498bed1c57a3e87a95f4b24f808514567a99505ba2f3cfec468cb6 not found: ID does not exist" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.181336 4856 scope.go:117] "RemoveContainer" containerID="11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a" Jan 26 
17:09:19 crc kubenswrapper[4856]: E0126 17:09:19.181631 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a\": container with ID starting with 11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a not found: ID does not exist" containerID="11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.181725 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a"} err="failed to get container status \"11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a\": rpc error: code = NotFound desc = could not find container \"11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a\": container with ID starting with 11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a not found: ID does not exist" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.181790 4856 scope.go:117] "RemoveContainer" containerID="4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7" Jan 26 17:09:19 crc kubenswrapper[4856]: E0126 17:09:19.182202 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7\": container with ID starting with 4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7 not found: ID does not exist" containerID="4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.182262 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7"} err="failed to get container status 
\"4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7\": rpc error: code = NotFound desc = could not find container \"4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7\": container with ID starting with 4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7 not found: ID does not exist" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.182303 4856 scope.go:117] "RemoveContainer" containerID="b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde" Jan 26 17:09:19 crc kubenswrapper[4856]: E0126 17:09:19.182726 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde\": container with ID starting with b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde not found: ID does not exist" containerID="b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.182807 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde"} err="failed to get container status \"b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde\": rpc error: code = NotFound desc = could not find container \"b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde\": container with ID starting with b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde not found: ID does not exist" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.182888 4856 scope.go:117] "RemoveContainer" containerID="7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3" Jan 26 17:09:19 crc kubenswrapper[4856]: E0126 17:09:19.183264 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3\": container with ID starting with 7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3 not found: ID does not exist" containerID="7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.183360 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3"} err="failed to get container status \"7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3\": rpc error: code = NotFound desc = could not find container \"7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3\": container with ID starting with 7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3 not found: ID does not exist" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.183425 4856 scope.go:117] "RemoveContainer" containerID="e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1" Jan 26 17:09:19 crc kubenswrapper[4856]: E0126 17:09:19.183854 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1\": container with ID starting with e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1 not found: ID does not exist" containerID="e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.183964 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1"} err="failed to get container status \"e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1\": rpc error: code = NotFound desc = could not find container \"e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1\": container with ID 
starting with e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1 not found: ID does not exist" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.184041 4856 scope.go:117] "RemoveContainer" containerID="83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc" Jan 26 17:09:19 crc kubenswrapper[4856]: E0126 17:09:19.184417 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc\": container with ID starting with 83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc not found: ID does not exist" containerID="83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.184440 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc"} err="failed to get container status \"83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc\": rpc error: code = NotFound desc = could not find container \"83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc\": container with ID starting with 83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc not found: ID does not exist" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.184457 4856 scope.go:117] "RemoveContainer" containerID="25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b" Jan 26 17:09:19 crc kubenswrapper[4856]: E0126 17:09:19.184722 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b\": container with ID starting with 25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b not found: ID does not exist" containerID="25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b" Jan 26 
17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.184748 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b"} err="failed to get container status \"25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b\": rpc error: code = NotFound desc = could not find container \"25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b\": container with ID starting with 25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b not found: ID does not exist" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.184778 4856 scope.go:117] "RemoveContainer" containerID="d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8" Jan 26 17:09:19 crc kubenswrapper[4856]: E0126 17:09:19.185019 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\": container with ID starting with d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8 not found: ID does not exist" containerID="d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.185161 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8"} err="failed to get container status \"d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\": rpc error: code = NotFound desc = could not find container \"d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\": container with ID starting with d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8 not found: ID does not exist" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.185263 4856 scope.go:117] "RemoveContainer" 
containerID="984517fa62d3e73cf4b33b7c4a101f8221c940f66938120373b7d41cac9c5e5c" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.185694 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"984517fa62d3e73cf4b33b7c4a101f8221c940f66938120373b7d41cac9c5e5c"} err="failed to get container status \"984517fa62d3e73cf4b33b7c4a101f8221c940f66938120373b7d41cac9c5e5c\": rpc error: code = NotFound desc = could not find container \"984517fa62d3e73cf4b33b7c4a101f8221c940f66938120373b7d41cac9c5e5c\": container with ID starting with 984517fa62d3e73cf4b33b7c4a101f8221c940f66938120373b7d41cac9c5e5c not found: ID does not exist" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.185737 4856 scope.go:117] "RemoveContainer" containerID="203d756903498bed1c57a3e87a95f4b24f808514567a99505ba2f3cfec468cb6" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.186012 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"203d756903498bed1c57a3e87a95f4b24f808514567a99505ba2f3cfec468cb6"} err="failed to get container status \"203d756903498bed1c57a3e87a95f4b24f808514567a99505ba2f3cfec468cb6\": rpc error: code = NotFound desc = could not find container \"203d756903498bed1c57a3e87a95f4b24f808514567a99505ba2f3cfec468cb6\": container with ID starting with 203d756903498bed1c57a3e87a95f4b24f808514567a99505ba2f3cfec468cb6 not found: ID does not exist" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.186154 4856 scope.go:117] "RemoveContainer" containerID="11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.186569 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a"} err="failed to get container status \"11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a\": rpc error: code = NotFound desc = could 
not find container \"11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a\": container with ID starting with 11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a not found: ID does not exist" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.186590 4856 scope.go:117] "RemoveContainer" containerID="4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.187113 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7"} err="failed to get container status \"4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7\": rpc error: code = NotFound desc = could not find container \"4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7\": container with ID starting with 4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7 not found: ID does not exist" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.187228 4856 scope.go:117] "RemoveContainer" containerID="b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.187715 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde"} err="failed to get container status \"b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde\": rpc error: code = NotFound desc = could not find container \"b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde\": container with ID starting with b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde not found: ID does not exist" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.187809 4856 scope.go:117] "RemoveContainer" containerID="7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 
17:09:19.188135 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3"} err="failed to get container status \"7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3\": rpc error: code = NotFound desc = could not find container \"7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3\": container with ID starting with 7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3 not found: ID does not exist" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.188168 4856 scope.go:117] "RemoveContainer" containerID="e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.188409 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1"} err="failed to get container status \"e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1\": rpc error: code = NotFound desc = could not find container \"e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1\": container with ID starting with e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1 not found: ID does not exist" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.188488 4856 scope.go:117] "RemoveContainer" containerID="83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.188823 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc"} err="failed to get container status \"83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc\": rpc error: code = NotFound desc = could not find container \"83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc\": container with ID starting with 
83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc not found: ID does not exist" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.188850 4856 scope.go:117] "RemoveContainer" containerID="25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.189138 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b"} err="failed to get container status \"25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b\": rpc error: code = NotFound desc = could not find container \"25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b\": container with ID starting with 25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b not found: ID does not exist" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.189240 4856 scope.go:117] "RemoveContainer" containerID="d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.189542 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8"} err="failed to get container status \"d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\": rpc error: code = NotFound desc = could not find container \"d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\": container with ID starting with d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8 not found: ID does not exist" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.189564 4856 scope.go:117] "RemoveContainer" containerID="984517fa62d3e73cf4b33b7c4a101f8221c940f66938120373b7d41cac9c5e5c" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.189954 4856 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"984517fa62d3e73cf4b33b7c4a101f8221c940f66938120373b7d41cac9c5e5c"} err="failed to get container status \"984517fa62d3e73cf4b33b7c4a101f8221c940f66938120373b7d41cac9c5e5c\": rpc error: code = NotFound desc = could not find container \"984517fa62d3e73cf4b33b7c4a101f8221c940f66938120373b7d41cac9c5e5c\": container with ID starting with 984517fa62d3e73cf4b33b7c4a101f8221c940f66938120373b7d41cac9c5e5c not found: ID does not exist" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.189984 4856 scope.go:117] "RemoveContainer" containerID="203d756903498bed1c57a3e87a95f4b24f808514567a99505ba2f3cfec468cb6" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.190294 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"203d756903498bed1c57a3e87a95f4b24f808514567a99505ba2f3cfec468cb6"} err="failed to get container status \"203d756903498bed1c57a3e87a95f4b24f808514567a99505ba2f3cfec468cb6\": rpc error: code = NotFound desc = could not find container \"203d756903498bed1c57a3e87a95f4b24f808514567a99505ba2f3cfec468cb6\": container with ID starting with 203d756903498bed1c57a3e87a95f4b24f808514567a99505ba2f3cfec468cb6 not found: ID does not exist" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.190322 4856 scope.go:117] "RemoveContainer" containerID="11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.190624 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a"} err="failed to get container status \"11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a\": rpc error: code = NotFound desc = could not find container \"11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a\": container with ID starting with 11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a not found: ID does not 
exist" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.190655 4856 scope.go:117] "RemoveContainer" containerID="4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.190885 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7"} err="failed to get container status \"4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7\": rpc error: code = NotFound desc = could not find container \"4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7\": container with ID starting with 4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7 not found: ID does not exist" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.190925 4856 scope.go:117] "RemoveContainer" containerID="b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.191602 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde"} err="failed to get container status \"b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde\": rpc error: code = NotFound desc = could not find container \"b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde\": container with ID starting with b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde not found: ID does not exist" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.191624 4856 scope.go:117] "RemoveContainer" containerID="7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.193115 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3"} err="failed to get container status 
\"7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3\": rpc error: code = NotFound desc = could not find container \"7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3\": container with ID starting with 7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3 not found: ID does not exist" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.193263 4856 scope.go:117] "RemoveContainer" containerID="e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.194231 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1"} err="failed to get container status \"e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1\": rpc error: code = NotFound desc = could not find container \"e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1\": container with ID starting with e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1 not found: ID does not exist" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.194257 4856 scope.go:117] "RemoveContainer" containerID="83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.195225 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc"} err="failed to get container status \"83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc\": rpc error: code = NotFound desc = could not find container \"83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc\": container with ID starting with 83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc not found: ID does not exist" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.195257 4856 scope.go:117] "RemoveContainer" 
containerID="25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.195709 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b"} err="failed to get container status \"25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b\": rpc error: code = NotFound desc = could not find container \"25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b\": container with ID starting with 25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b not found: ID does not exist" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.195732 4856 scope.go:117] "RemoveContainer" containerID="d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.196201 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8"} err="failed to get container status \"d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\": rpc error: code = NotFound desc = could not find container \"d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\": container with ID starting with d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8 not found: ID does not exist" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.196229 4856 scope.go:117] "RemoveContainer" containerID="984517fa62d3e73cf4b33b7c4a101f8221c940f66938120373b7d41cac9c5e5c" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.199011 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"984517fa62d3e73cf4b33b7c4a101f8221c940f66938120373b7d41cac9c5e5c"} err="failed to get container status \"984517fa62d3e73cf4b33b7c4a101f8221c940f66938120373b7d41cac9c5e5c\": rpc error: code = NotFound desc = could 
not find container \"984517fa62d3e73cf4b33b7c4a101f8221c940f66938120373b7d41cac9c5e5c\": container with ID starting with 984517fa62d3e73cf4b33b7c4a101f8221c940f66938120373b7d41cac9c5e5c not found: ID does not exist" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.199193 4856 scope.go:117] "RemoveContainer" containerID="203d756903498bed1c57a3e87a95f4b24f808514567a99505ba2f3cfec468cb6" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.199819 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"203d756903498bed1c57a3e87a95f4b24f808514567a99505ba2f3cfec468cb6"} err="failed to get container status \"203d756903498bed1c57a3e87a95f4b24f808514567a99505ba2f3cfec468cb6\": rpc error: code = NotFound desc = could not find container \"203d756903498bed1c57a3e87a95f4b24f808514567a99505ba2f3cfec468cb6\": container with ID starting with 203d756903498bed1c57a3e87a95f4b24f808514567a99505ba2f3cfec468cb6 not found: ID does not exist" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.199915 4856 scope.go:117] "RemoveContainer" containerID="11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.200306 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a"} err="failed to get container status \"11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a\": rpc error: code = NotFound desc = could not find container \"11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a\": container with ID starting with 11205cc9ebaa35eb58159e387e540fdb6fa8a75b628b5f1b1e79e640665ced4a not found: ID does not exist" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.200331 4856 scope.go:117] "RemoveContainer" containerID="4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 
17:09:19.200594 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7"} err="failed to get container status \"4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7\": rpc error: code = NotFound desc = could not find container \"4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7\": container with ID starting with 4a1a575c02f04d857f279139bf41de49cbf2ad326a9e63be6757b7fe72dd26d7 not found: ID does not exist" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.200702 4856 scope.go:117] "RemoveContainer" containerID="b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.201098 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde"} err="failed to get container status \"b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde\": rpc error: code = NotFound desc = could not find container \"b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde\": container with ID starting with b1509fb370f6f5002154d55db3aa12e20b4aaef2552faccc0bc6e22378a28fde not found: ID does not exist" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.201121 4856 scope.go:117] "RemoveContainer" containerID="7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.201486 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3"} err="failed to get container status \"7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3\": rpc error: code = NotFound desc = could not find container \"7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3\": container with ID starting with 
7f3ce6d59efe2830eff50e1ff6deb9464d70926fa6b937c4fa325b2f6c82cca3 not found: ID does not exist" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.201598 4856 scope.go:117] "RemoveContainer" containerID="e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.201931 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1"} err="failed to get container status \"e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1\": rpc error: code = NotFound desc = could not find container \"e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1\": container with ID starting with e9f7a132d7600cfb27a4fa2a46f8d3469faa1f4f3792f99884ca456fa4aa71b1 not found: ID does not exist" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.202027 4856 scope.go:117] "RemoveContainer" containerID="83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.202506 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc"} err="failed to get container status \"83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc\": rpc error: code = NotFound desc = could not find container \"83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc\": container with ID starting with 83a08d4efc6e12956b4420eeec79e50e426f29b90bcf50edadceddc1718d88fc not found: ID does not exist" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.202647 4856 scope.go:117] "RemoveContainer" containerID="25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.203110 4856 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b"} err="failed to get container status \"25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b\": rpc error: code = NotFound desc = could not find container \"25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b\": container with ID starting with 25056a87b6686fa3172581a9c5a889f286abe72ac2acfe81463e072deb9e850b not found: ID does not exist" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.203200 4856 scope.go:117] "RemoveContainer" containerID="d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.203566 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8"} err="failed to get container status \"d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\": rpc error: code = NotFound desc = could not find container \"d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8\": container with ID starting with d67317281647e2a2cf1b88f162a6c3cc224c243ff7a13f9706f416ca0e45dee8 not found: ID does not exist" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.203598 4856 scope.go:117] "RemoveContainer" containerID="984517fa62d3e73cf4b33b7c4a101f8221c940f66938120373b7d41cac9c5e5c" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.203856 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"984517fa62d3e73cf4b33b7c4a101f8221c940f66938120373b7d41cac9c5e5c"} err="failed to get container status \"984517fa62d3e73cf4b33b7c4a101f8221c940f66938120373b7d41cac9c5e5c\": rpc error: code = NotFound desc = could not find container \"984517fa62d3e73cf4b33b7c4a101f8221c940f66938120373b7d41cac9c5e5c\": container with ID starting with 984517fa62d3e73cf4b33b7c4a101f8221c940f66938120373b7d41cac9c5e5c not found: ID does not 
exist" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.403469 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab5b6f50-172b-4535-a0f9-5d103bcab4e7" path="/var/lib/kubelet/pods/ab5b6f50-172b-4535-a0f9-5d103bcab4e7/volumes" Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.990968 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" event={"ID":"d70f72f7-ff5e-4906-8622-9cddfe769d55","Type":"ContainerStarted","Data":"91ef65730af9e2bdd7621585a901cb68f84170da5efdf43be4173574d0ab23e3"} Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.991316 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" event={"ID":"d70f72f7-ff5e-4906-8622-9cddfe769d55","Type":"ContainerStarted","Data":"0c74ebe4a236a613e6e7856badc6e52a30604cfa5668c23fb59ca1b7570a3c8a"} Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.991333 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" event={"ID":"d70f72f7-ff5e-4906-8622-9cddfe769d55","Type":"ContainerStarted","Data":"fa2f5e2ecee3ae541cd01ea593fb352363c772d981279fd1b557036b071363a0"} Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.991342 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" event={"ID":"d70f72f7-ff5e-4906-8622-9cddfe769d55","Type":"ContainerStarted","Data":"29faba44ad25215ae335927fbfb28db292d80b8727d2b69e64f52a531eba697a"} Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.991351 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" event={"ID":"d70f72f7-ff5e-4906-8622-9cddfe769d55","Type":"ContainerStarted","Data":"37fc5e527ffa138328aa0bc335a3685420067c147999a07f5198c6168d40b84c"} Jan 26 17:09:19 crc kubenswrapper[4856]: I0126 17:09:19.991360 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" event={"ID":"d70f72f7-ff5e-4906-8622-9cddfe769d55","Type":"ContainerStarted","Data":"3e07b4c3b62de15e63aaa9d185f89327d5da32a85c68e5c16a0217ac747b5fcc"} Jan 26 17:09:22 crc kubenswrapper[4856]: I0126 17:09:22.008599 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" event={"ID":"d70f72f7-ff5e-4906-8622-9cddfe769d55","Type":"ContainerStarted","Data":"3c8af924f05f38bec8960462460d29ef0d36c059c00c95719be5ab41276cb331"} Jan 26 17:09:25 crc kubenswrapper[4856]: I0126 17:09:25.030291 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" event={"ID":"d70f72f7-ff5e-4906-8622-9cddfe769d55","Type":"ContainerStarted","Data":"b34423efb2d1e07043ed5f4c4bb31cedabbbaf0d8733a1651fbab70cc95a3c6c"} Jan 26 17:09:25 crc kubenswrapper[4856]: I0126 17:09:25.030827 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:25 crc kubenswrapper[4856]: I0126 17:09:25.030840 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:25 crc kubenswrapper[4856]: I0126 17:09:25.030849 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:25 crc kubenswrapper[4856]: I0126 17:09:25.054355 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:25 crc kubenswrapper[4856]: I0126 17:09:25.059277 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" podStartSLOduration=7.059249732 podStartE2EDuration="7.059249732s" podCreationTimestamp="2026-01-26 17:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-26 17:09:25.057688437 +0000 UTC m=+661.010942428" watchObservedRunningTime="2026-01-26 17:09:25.059249732 +0000 UTC m=+661.012503723" Jan 26 17:09:25 crc kubenswrapper[4856]: I0126 17:09:25.060962 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:09:25 crc kubenswrapper[4856]: I0126 17:09:25.827083 4856 scope.go:117] "RemoveContainer" containerID="67c41d7af13d33af9423d069e86b531ff9d226b1435b62347517f490f3904943" Jan 26 17:09:25 crc kubenswrapper[4856]: I0126 17:09:25.848408 4856 scope.go:117] "RemoveContainer" containerID="d684603f69b61a1ce87ec7d1d3ef00e518372571ee64ede6a51ce75afd2227ca" Jan 26 17:09:25 crc kubenswrapper[4856]: I0126 17:09:25.869358 4856 scope.go:117] "RemoveContainer" containerID="afeb20035224feeab28a92ac77b43a24e653e49c56a25590a9861019a2b7a8ff" Jan 26 17:09:26 crc kubenswrapper[4856]: I0126 17:09:26.939201 4856 patch_prober.go:28] interesting pod/machine-config-daemon-xm9cq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:09:26 crc kubenswrapper[4856]: I0126 17:09:26.941228 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:09:26 crc kubenswrapper[4856]: I0126 17:09:26.941466 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" Jan 26 17:09:26 crc kubenswrapper[4856]: I0126 17:09:26.942616 4856 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fe42c0299ac9f35a2260caaf7226f7e2161da013442117dab0d25a7c69c46115"} pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 17:09:26 crc kubenswrapper[4856]: I0126 17:09:26.942920 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" containerName="machine-config-daemon" containerID="cri-o://fe42c0299ac9f35a2260caaf7226f7e2161da013442117dab0d25a7c69c46115" gracePeriod=600 Jan 26 17:09:27 crc kubenswrapper[4856]: I0126 17:09:27.044610 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rq622_7a742e7b-c420-46e3-9e96-e9c744af6124/kube-multus/2.log" Jan 26 17:09:28 crc kubenswrapper[4856]: I0126 17:09:28.054728 4856 generic.go:334] "Generic (PLEG): container finished" podID="63c75ede-5170-4db0-811b-5217ef8d72b3" containerID="fe42c0299ac9f35a2260caaf7226f7e2161da013442117dab0d25a7c69c46115" exitCode=0 Jan 26 17:09:28 crc kubenswrapper[4856]: I0126 17:09:28.054848 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" event={"ID":"63c75ede-5170-4db0-811b-5217ef8d72b3","Type":"ContainerDied","Data":"fe42c0299ac9f35a2260caaf7226f7e2161da013442117dab0d25a7c69c46115"} Jan 26 17:09:28 crc kubenswrapper[4856]: I0126 17:09:28.055067 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" event={"ID":"63c75ede-5170-4db0-811b-5217ef8d72b3","Type":"ContainerStarted","Data":"bb3fb578d0ea2b4eb264b402043faa4d1923f5d38749a2ee2c65b084c2e291bd"} Jan 26 17:09:28 crc kubenswrapper[4856]: I0126 17:09:28.055117 4856 scope.go:117] "RemoveContainer" 
containerID="9758bfdfd1807e791935ac7ec93246863e5867351e35d27ffaff68ae79110e9c" Jan 26 17:09:34 crc kubenswrapper[4856]: I0126 17:09:34.396001 4856 scope.go:117] "RemoveContainer" containerID="ddec0dbea657c6160cfdfd78886d5ae335dab8b667b0e0e3813dffa86a2ae2dc" Jan 26 17:09:34 crc kubenswrapper[4856]: E0126 17:09:34.396945 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-rq622_openshift-multus(7a742e7b-c420-46e3-9e96-e9c744af6124)\"" pod="openshift-multus/multus-rq622" podUID="7a742e7b-c420-46e3-9e96-e9c744af6124" Jan 26 17:09:46 crc kubenswrapper[4856]: I0126 17:09:46.395383 4856 scope.go:117] "RemoveContainer" containerID="ddec0dbea657c6160cfdfd78886d5ae335dab8b667b0e0e3813dffa86a2ae2dc" Jan 26 17:09:47 crc kubenswrapper[4856]: I0126 17:09:47.189977 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rq622_7a742e7b-c420-46e3-9e96-e9c744af6124/kube-multus/2.log" Jan 26 17:09:47 crc kubenswrapper[4856]: I0126 17:09:47.190343 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rq622" event={"ID":"7a742e7b-c420-46e3-9e96-e9c744af6124","Type":"ContainerStarted","Data":"443165ce1d5496709cff016aaa51725cac9a85718dc182fc9666e5c69f45c262"} Jan 26 17:09:48 crc kubenswrapper[4856]: I0126 17:09:48.638361 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6t25x" Jan 26 17:10:39 crc kubenswrapper[4856]: I0126 17:10:39.699767 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-97mff"] Jan 26 17:10:39 crc kubenswrapper[4856]: I0126 17:10:39.702516 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-97mff" podUID="886857c0-659b-4904-b75a-c55c3f712747" containerName="registry-server" 
containerID="cri-o://86283045d7d1049d9d8358f985c6aac8c275ef1f0b7a9715b13fd30bd1c328e7" gracePeriod=30 Jan 26 17:10:40 crc kubenswrapper[4856]: I0126 17:10:40.019857 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-97mff" Jan 26 17:10:40 crc kubenswrapper[4856]: I0126 17:10:40.209385 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9487\" (UniqueName: \"kubernetes.io/projected/886857c0-659b-4904-b75a-c55c3f712747-kube-api-access-q9487\") pod \"886857c0-659b-4904-b75a-c55c3f712747\" (UID: \"886857c0-659b-4904-b75a-c55c3f712747\") " Jan 26 17:10:40 crc kubenswrapper[4856]: I0126 17:10:40.209503 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/886857c0-659b-4904-b75a-c55c3f712747-utilities\") pod \"886857c0-659b-4904-b75a-c55c3f712747\" (UID: \"886857c0-659b-4904-b75a-c55c3f712747\") " Jan 26 17:10:40 crc kubenswrapper[4856]: I0126 17:10:40.209629 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/886857c0-659b-4904-b75a-c55c3f712747-catalog-content\") pod \"886857c0-659b-4904-b75a-c55c3f712747\" (UID: \"886857c0-659b-4904-b75a-c55c3f712747\") " Jan 26 17:10:40 crc kubenswrapper[4856]: I0126 17:10:40.210884 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/886857c0-659b-4904-b75a-c55c3f712747-utilities" (OuterVolumeSpecName: "utilities") pod "886857c0-659b-4904-b75a-c55c3f712747" (UID: "886857c0-659b-4904-b75a-c55c3f712747"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:10:40 crc kubenswrapper[4856]: I0126 17:10:40.217815 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/886857c0-659b-4904-b75a-c55c3f712747-kube-api-access-q9487" (OuterVolumeSpecName: "kube-api-access-q9487") pod "886857c0-659b-4904-b75a-c55c3f712747" (UID: "886857c0-659b-4904-b75a-c55c3f712747"). InnerVolumeSpecName "kube-api-access-q9487". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:10:40 crc kubenswrapper[4856]: I0126 17:10:40.234950 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/886857c0-659b-4904-b75a-c55c3f712747-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "886857c0-659b-4904-b75a-c55c3f712747" (UID: "886857c0-659b-4904-b75a-c55c3f712747"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:10:40 crc kubenswrapper[4856]: I0126 17:10:40.311490 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9487\" (UniqueName: \"kubernetes.io/projected/886857c0-659b-4904-b75a-c55c3f712747-kube-api-access-q9487\") on node \"crc\" DevicePath \"\"" Jan 26 17:10:40 crc kubenswrapper[4856]: I0126 17:10:40.311574 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/886857c0-659b-4904-b75a-c55c3f712747-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:10:40 crc kubenswrapper[4856]: I0126 17:10:40.311588 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/886857c0-659b-4904-b75a-c55c3f712747-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:10:40 crc kubenswrapper[4856]: I0126 17:10:40.554503 4856 generic.go:334] "Generic (PLEG): container finished" podID="886857c0-659b-4904-b75a-c55c3f712747" 
containerID="86283045d7d1049d9d8358f985c6aac8c275ef1f0b7a9715b13fd30bd1c328e7" exitCode=0 Jan 26 17:10:40 crc kubenswrapper[4856]: I0126 17:10:40.554890 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-97mff" event={"ID":"886857c0-659b-4904-b75a-c55c3f712747","Type":"ContainerDied","Data":"86283045d7d1049d9d8358f985c6aac8c275ef1f0b7a9715b13fd30bd1c328e7"} Jan 26 17:10:40 crc kubenswrapper[4856]: I0126 17:10:40.555091 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-97mff" event={"ID":"886857c0-659b-4904-b75a-c55c3f712747","Type":"ContainerDied","Data":"2063535f537d4fe37e3e34708f04c20619c6cc50b85697e69a9333b26c91a793"} Jan 26 17:10:40 crc kubenswrapper[4856]: I0126 17:10:40.555360 4856 scope.go:117] "RemoveContainer" containerID="86283045d7d1049d9d8358f985c6aac8c275ef1f0b7a9715b13fd30bd1c328e7" Jan 26 17:10:40 crc kubenswrapper[4856]: I0126 17:10:40.555804 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-97mff" Jan 26 17:10:40 crc kubenswrapper[4856]: I0126 17:10:40.578098 4856 scope.go:117] "RemoveContainer" containerID="a75ef75367730507a8b7594226c5e9d4e14716073f574dda81c029b084dafd94" Jan 26 17:10:40 crc kubenswrapper[4856]: I0126 17:10:40.600375 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-97mff"] Jan 26 17:10:40 crc kubenswrapper[4856]: I0126 17:10:40.602740 4856 scope.go:117] "RemoveContainer" containerID="8cce484e79d411777eb43ce1a40864e7613f816cb566efdd41677d117f9c3633" Jan 26 17:10:40 crc kubenswrapper[4856]: I0126 17:10:40.604927 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-97mff"] Jan 26 17:10:40 crc kubenswrapper[4856]: I0126 17:10:40.617053 4856 scope.go:117] "RemoveContainer" containerID="86283045d7d1049d9d8358f985c6aac8c275ef1f0b7a9715b13fd30bd1c328e7" Jan 26 17:10:40 crc kubenswrapper[4856]: E0126 17:10:40.617617 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86283045d7d1049d9d8358f985c6aac8c275ef1f0b7a9715b13fd30bd1c328e7\": container with ID starting with 86283045d7d1049d9d8358f985c6aac8c275ef1f0b7a9715b13fd30bd1c328e7 not found: ID does not exist" containerID="86283045d7d1049d9d8358f985c6aac8c275ef1f0b7a9715b13fd30bd1c328e7" Jan 26 17:10:40 crc kubenswrapper[4856]: I0126 17:10:40.617767 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86283045d7d1049d9d8358f985c6aac8c275ef1f0b7a9715b13fd30bd1c328e7"} err="failed to get container status \"86283045d7d1049d9d8358f985c6aac8c275ef1f0b7a9715b13fd30bd1c328e7\": rpc error: code = NotFound desc = could not find container \"86283045d7d1049d9d8358f985c6aac8c275ef1f0b7a9715b13fd30bd1c328e7\": container with ID starting with 86283045d7d1049d9d8358f985c6aac8c275ef1f0b7a9715b13fd30bd1c328e7 not found: 
ID does not exist" Jan 26 17:10:40 crc kubenswrapper[4856]: I0126 17:10:40.617859 4856 scope.go:117] "RemoveContainer" containerID="a75ef75367730507a8b7594226c5e9d4e14716073f574dda81c029b084dafd94" Jan 26 17:10:40 crc kubenswrapper[4856]: E0126 17:10:40.618331 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a75ef75367730507a8b7594226c5e9d4e14716073f574dda81c029b084dafd94\": container with ID starting with a75ef75367730507a8b7594226c5e9d4e14716073f574dda81c029b084dafd94 not found: ID does not exist" containerID="a75ef75367730507a8b7594226c5e9d4e14716073f574dda81c029b084dafd94" Jan 26 17:10:40 crc kubenswrapper[4856]: I0126 17:10:40.618422 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a75ef75367730507a8b7594226c5e9d4e14716073f574dda81c029b084dafd94"} err="failed to get container status \"a75ef75367730507a8b7594226c5e9d4e14716073f574dda81c029b084dafd94\": rpc error: code = NotFound desc = could not find container \"a75ef75367730507a8b7594226c5e9d4e14716073f574dda81c029b084dafd94\": container with ID starting with a75ef75367730507a8b7594226c5e9d4e14716073f574dda81c029b084dafd94 not found: ID does not exist" Jan 26 17:10:40 crc kubenswrapper[4856]: I0126 17:10:40.618484 4856 scope.go:117] "RemoveContainer" containerID="8cce484e79d411777eb43ce1a40864e7613f816cb566efdd41677d117f9c3633" Jan 26 17:10:40 crc kubenswrapper[4856]: E0126 17:10:40.619030 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cce484e79d411777eb43ce1a40864e7613f816cb566efdd41677d117f9c3633\": container with ID starting with 8cce484e79d411777eb43ce1a40864e7613f816cb566efdd41677d117f9c3633 not found: ID does not exist" containerID="8cce484e79d411777eb43ce1a40864e7613f816cb566efdd41677d117f9c3633" Jan 26 17:10:40 crc kubenswrapper[4856]: I0126 17:10:40.619086 4856 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cce484e79d411777eb43ce1a40864e7613f816cb566efdd41677d117f9c3633"} err="failed to get container status \"8cce484e79d411777eb43ce1a40864e7613f816cb566efdd41677d117f9c3633\": rpc error: code = NotFound desc = could not find container \"8cce484e79d411777eb43ce1a40864e7613f816cb566efdd41677d117f9c3633\": container with ID starting with 8cce484e79d411777eb43ce1a40864e7613f816cb566efdd41677d117f9c3633 not found: ID does not exist" Jan 26 17:10:41 crc kubenswrapper[4856]: I0126 17:10:41.406272 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="886857c0-659b-4904-b75a-c55c3f712747" path="/var/lib/kubelet/pods/886857c0-659b-4904-b75a-c55c3f712747/volumes" Jan 26 17:10:43 crc kubenswrapper[4856]: I0126 17:10:43.922960 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08924r9"] Jan 26 17:10:43 crc kubenswrapper[4856]: E0126 17:10:43.923704 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="886857c0-659b-4904-b75a-c55c3f712747" containerName="extract-content" Jan 26 17:10:43 crc kubenswrapper[4856]: I0126 17:10:43.923728 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="886857c0-659b-4904-b75a-c55c3f712747" containerName="extract-content" Jan 26 17:10:43 crc kubenswrapper[4856]: E0126 17:10:43.923746 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="886857c0-659b-4904-b75a-c55c3f712747" containerName="extract-utilities" Jan 26 17:10:43 crc kubenswrapper[4856]: I0126 17:10:43.923760 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="886857c0-659b-4904-b75a-c55c3f712747" containerName="extract-utilities" Jan 26 17:10:43 crc kubenswrapper[4856]: E0126 17:10:43.923770 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="886857c0-659b-4904-b75a-c55c3f712747" containerName="registry-server" Jan 26 17:10:43 crc kubenswrapper[4856]: 
I0126 17:10:43.923777 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="886857c0-659b-4904-b75a-c55c3f712747" containerName="registry-server" Jan 26 17:10:43 crc kubenswrapper[4856]: I0126 17:10:43.923980 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="886857c0-659b-4904-b75a-c55c3f712747" containerName="registry-server" Jan 26 17:10:43 crc kubenswrapper[4856]: I0126 17:10:43.925109 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08924r9" Jan 26 17:10:43 crc kubenswrapper[4856]: I0126 17:10:43.928247 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 26 17:10:43 crc kubenswrapper[4856]: I0126 17:10:43.937904 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08924r9"] Jan 26 17:10:44 crc kubenswrapper[4856]: I0126 17:10:44.001327 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bfqg\" (UniqueName: \"kubernetes.io/projected/64c65d72-3459-4893-a33a-9033e12f188a-kube-api-access-8bfqg\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08924r9\" (UID: \"64c65d72-3459-4893-a33a-9033e12f188a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08924r9" Jan 26 17:10:44 crc kubenswrapper[4856]: I0126 17:10:44.001444 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/64c65d72-3459-4893-a33a-9033e12f188a-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08924r9\" (UID: \"64c65d72-3459-4893-a33a-9033e12f188a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08924r9" Jan 26 17:10:44 crc kubenswrapper[4856]: I0126 
17:10:44.001494 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/64c65d72-3459-4893-a33a-9033e12f188a-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08924r9\" (UID: \"64c65d72-3459-4893-a33a-9033e12f188a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08924r9" Jan 26 17:10:44 crc kubenswrapper[4856]: I0126 17:10:44.102601 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bfqg\" (UniqueName: \"kubernetes.io/projected/64c65d72-3459-4893-a33a-9033e12f188a-kube-api-access-8bfqg\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08924r9\" (UID: \"64c65d72-3459-4893-a33a-9033e12f188a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08924r9" Jan 26 17:10:44 crc kubenswrapper[4856]: I0126 17:10:44.103208 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/64c65d72-3459-4893-a33a-9033e12f188a-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08924r9\" (UID: \"64c65d72-3459-4893-a33a-9033e12f188a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08924r9" Jan 26 17:10:44 crc kubenswrapper[4856]: I0126 17:10:44.103368 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/64c65d72-3459-4893-a33a-9033e12f188a-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08924r9\" (UID: \"64c65d72-3459-4893-a33a-9033e12f188a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08924r9" Jan 26 17:10:44 crc kubenswrapper[4856]: I0126 17:10:44.103786 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/64c65d72-3459-4893-a33a-9033e12f188a-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08924r9\" (UID: \"64c65d72-3459-4893-a33a-9033e12f188a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08924r9" Jan 26 17:10:44 crc kubenswrapper[4856]: I0126 17:10:44.103890 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/64c65d72-3459-4893-a33a-9033e12f188a-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08924r9\" (UID: \"64c65d72-3459-4893-a33a-9033e12f188a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08924r9" Jan 26 17:10:44 crc kubenswrapper[4856]: I0126 17:10:44.123131 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bfqg\" (UniqueName: \"kubernetes.io/projected/64c65d72-3459-4893-a33a-9033e12f188a-kube-api-access-8bfqg\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08924r9\" (UID: \"64c65d72-3459-4893-a33a-9033e12f188a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08924r9" Jan 26 17:10:44 crc kubenswrapper[4856]: I0126 17:10:44.241776 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08924r9" Jan 26 17:10:44 crc kubenswrapper[4856]: I0126 17:10:44.417347 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08924r9"] Jan 26 17:10:44 crc kubenswrapper[4856]: I0126 17:10:44.602961 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08924r9" event={"ID":"64c65d72-3459-4893-a33a-9033e12f188a","Type":"ContainerStarted","Data":"89890338723d888ef2be71ab9569ddc46c833b10c10f6fb75ebe4a541095d7fa"} Jan 26 17:10:44 crc kubenswrapper[4856]: I0126 17:10:44.603012 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08924r9" event={"ID":"64c65d72-3459-4893-a33a-9033e12f188a","Type":"ContainerStarted","Data":"3f751138bd790ab355d7a769821c2ba507a3c1d559249b5a974c3d17ab565a17"} Jan 26 17:10:45 crc kubenswrapper[4856]: I0126 17:10:45.610376 4856 generic.go:334] "Generic (PLEG): container finished" podID="64c65d72-3459-4893-a33a-9033e12f188a" containerID="89890338723d888ef2be71ab9569ddc46c833b10c10f6fb75ebe4a541095d7fa" exitCode=0 Jan 26 17:10:45 crc kubenswrapper[4856]: I0126 17:10:45.610766 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08924r9" event={"ID":"64c65d72-3459-4893-a33a-9033e12f188a","Type":"ContainerDied","Data":"89890338723d888ef2be71ab9569ddc46c833b10c10f6fb75ebe4a541095d7fa"} Jan 26 17:10:45 crc kubenswrapper[4856]: I0126 17:10:45.615103 4856 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 17:10:49 crc kubenswrapper[4856]: I0126 17:10:49.634320 4856 generic.go:334] "Generic (PLEG): container finished" podID="64c65d72-3459-4893-a33a-9033e12f188a" 
containerID="78251e21df888156bd730c79137b6f0a500a76e4efd92c48da4997db63024cb0" exitCode=0 Jan 26 17:10:49 crc kubenswrapper[4856]: I0126 17:10:49.634441 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08924r9" event={"ID":"64c65d72-3459-4893-a33a-9033e12f188a","Type":"ContainerDied","Data":"78251e21df888156bd730c79137b6f0a500a76e4efd92c48da4997db63024cb0"} Jan 26 17:10:50 crc kubenswrapper[4856]: I0126 17:10:50.642907 4856 generic.go:334] "Generic (PLEG): container finished" podID="64c65d72-3459-4893-a33a-9033e12f188a" containerID="1881326ff739ce94ce545803f5d57dc24b0f53ff04a2bbb6f44ae945527f62ea" exitCode=0 Jan 26 17:10:50 crc kubenswrapper[4856]: I0126 17:10:50.642961 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08924r9" event={"ID":"64c65d72-3459-4893-a33a-9033e12f188a","Type":"ContainerDied","Data":"1881326ff739ce94ce545803f5d57dc24b0f53ff04a2bbb6f44ae945527f62ea"} Jan 26 17:10:51 crc kubenswrapper[4856]: I0126 17:10:51.916142 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08924r9" Jan 26 17:10:51 crc kubenswrapper[4856]: I0126 17:10:51.931228 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/64c65d72-3459-4893-a33a-9033e12f188a-util\") pod \"64c65d72-3459-4893-a33a-9033e12f188a\" (UID: \"64c65d72-3459-4893-a33a-9033e12f188a\") " Jan 26 17:10:51 crc kubenswrapper[4856]: I0126 17:10:51.931386 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bfqg\" (UniqueName: \"kubernetes.io/projected/64c65d72-3459-4893-a33a-9033e12f188a-kube-api-access-8bfqg\") pod \"64c65d72-3459-4893-a33a-9033e12f188a\" (UID: \"64c65d72-3459-4893-a33a-9033e12f188a\") " Jan 26 17:10:51 crc kubenswrapper[4856]: I0126 17:10:51.931449 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/64c65d72-3459-4893-a33a-9033e12f188a-bundle\") pod \"64c65d72-3459-4893-a33a-9033e12f188a\" (UID: \"64c65d72-3459-4893-a33a-9033e12f188a\") " Jan 26 17:10:51 crc kubenswrapper[4856]: I0126 17:10:51.939281 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64c65d72-3459-4893-a33a-9033e12f188a-bundle" (OuterVolumeSpecName: "bundle") pod "64c65d72-3459-4893-a33a-9033e12f188a" (UID: "64c65d72-3459-4893-a33a-9033e12f188a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:10:51 crc kubenswrapper[4856]: I0126 17:10:51.946250 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64c65d72-3459-4893-a33a-9033e12f188a-util" (OuterVolumeSpecName: "util") pod "64c65d72-3459-4893-a33a-9033e12f188a" (UID: "64c65d72-3459-4893-a33a-9033e12f188a"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:10:51 crc kubenswrapper[4856]: I0126 17:10:51.946711 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64c65d72-3459-4893-a33a-9033e12f188a-kube-api-access-8bfqg" (OuterVolumeSpecName: "kube-api-access-8bfqg") pod "64c65d72-3459-4893-a33a-9033e12f188a" (UID: "64c65d72-3459-4893-a33a-9033e12f188a"). InnerVolumeSpecName "kube-api-access-8bfqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:10:52 crc kubenswrapper[4856]: I0126 17:10:52.033350 4856 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/64c65d72-3459-4893-a33a-9033e12f188a-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 17:10:52 crc kubenswrapper[4856]: I0126 17:10:52.033390 4856 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/64c65d72-3459-4893-a33a-9033e12f188a-util\") on node \"crc\" DevicePath \"\"" Jan 26 17:10:52 crc kubenswrapper[4856]: I0126 17:10:52.033400 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bfqg\" (UniqueName: \"kubernetes.io/projected/64c65d72-3459-4893-a33a-9033e12f188a-kube-api-access-8bfqg\") on node \"crc\" DevicePath \"\"" Jan 26 17:10:52 crc kubenswrapper[4856]: I0126 17:10:52.673786 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e25mvq"] Jan 26 17:10:52 crc kubenswrapper[4856]: E0126 17:10:52.674113 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64c65d72-3459-4893-a33a-9033e12f188a" containerName="util" Jan 26 17:10:52 crc kubenswrapper[4856]: I0126 17:10:52.674136 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="64c65d72-3459-4893-a33a-9033e12f188a" containerName="util" Jan 26 17:10:52 crc kubenswrapper[4856]: E0126 17:10:52.674150 4856 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="64c65d72-3459-4893-a33a-9033e12f188a" containerName="extract" Jan 26 17:10:52 crc kubenswrapper[4856]: I0126 17:10:52.674160 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="64c65d72-3459-4893-a33a-9033e12f188a" containerName="extract" Jan 26 17:10:52 crc kubenswrapper[4856]: E0126 17:10:52.674177 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64c65d72-3459-4893-a33a-9033e12f188a" containerName="pull" Jan 26 17:10:52 crc kubenswrapper[4856]: I0126 17:10:52.674185 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="64c65d72-3459-4893-a33a-9033e12f188a" containerName="pull" Jan 26 17:10:52 crc kubenswrapper[4856]: I0126 17:10:52.674314 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="64c65d72-3459-4893-a33a-9033e12f188a" containerName="extract" Jan 26 17:10:52 crc kubenswrapper[4856]: I0126 17:10:52.675258 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e25mvq" Jan 26 17:10:52 crc kubenswrapper[4856]: I0126 17:10:52.684891 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e25mvq"] Jan 26 17:10:52 crc kubenswrapper[4856]: I0126 17:10:52.707561 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08924r9" event={"ID":"64c65d72-3459-4893-a33a-9033e12f188a","Type":"ContainerDied","Data":"3f751138bd790ab355d7a769821c2ba507a3c1d559249b5a974c3d17ab565a17"} Jan 26 17:10:52 crc kubenswrapper[4856]: I0126 17:10:52.707604 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f751138bd790ab355d7a769821c2ba507a3c1d559249b5a974c3d17ab565a17" Jan 26 17:10:52 crc kubenswrapper[4856]: I0126 17:10:52.707641 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08924r9" Jan 26 17:10:52 crc kubenswrapper[4856]: I0126 17:10:52.742816 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e25mvq\" (UID: \"26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e25mvq" Jan 26 17:10:52 crc kubenswrapper[4856]: I0126 17:10:52.743007 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2wfr\" (UniqueName: \"kubernetes.io/projected/26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a-kube-api-access-j2wfr\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e25mvq\" (UID: \"26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e25mvq" Jan 26 17:10:52 crc kubenswrapper[4856]: I0126 17:10:52.743322 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e25mvq\" (UID: \"26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e25mvq" Jan 26 17:10:52 crc kubenswrapper[4856]: I0126 17:10:52.845034 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e25mvq\" (UID: \"26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a\") " 
pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e25mvq" Jan 26 17:10:52 crc kubenswrapper[4856]: I0126 17:10:52.845110 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2wfr\" (UniqueName: \"kubernetes.io/projected/26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a-kube-api-access-j2wfr\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e25mvq\" (UID: \"26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e25mvq" Jan 26 17:10:52 crc kubenswrapper[4856]: I0126 17:10:52.845141 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e25mvq\" (UID: \"26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e25mvq" Jan 26 17:10:52 crc kubenswrapper[4856]: I0126 17:10:52.846158 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e25mvq\" (UID: \"26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e25mvq" Jan 26 17:10:52 crc kubenswrapper[4856]: I0126 17:10:52.846152 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e25mvq\" (UID: \"26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e25mvq" Jan 26 17:10:52 crc kubenswrapper[4856]: I0126 17:10:52.870678 4856 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2wfr\" (UniqueName: \"kubernetes.io/projected/26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a-kube-api-access-j2wfr\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e25mvq\" (UID: \"26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e25mvq" Jan 26 17:10:52 crc kubenswrapper[4856]: I0126 17:10:52.990753 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e25mvq" Jan 26 17:10:53 crc kubenswrapper[4856]: I0126 17:10:53.168064 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e25mvq"] Jan 26 17:10:53 crc kubenswrapper[4856]: I0126 17:10:53.671130 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxrgwg"] Jan 26 17:10:53 crc kubenswrapper[4856]: I0126 17:10:53.673034 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxrgwg" Jan 26 17:10:53 crc kubenswrapper[4856]: I0126 17:10:53.684729 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxrgwg"] Jan 26 17:10:53 crc kubenswrapper[4856]: I0126 17:10:53.713397 4856 generic.go:334] "Generic (PLEG): container finished" podID="26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a" containerID="c757a6d599d238252cf69718c03a250c4bd64e682f5b5456a36c0a4aa37edbc8" exitCode=0 Jan 26 17:10:53 crc kubenswrapper[4856]: I0126 17:10:53.713439 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e25mvq" event={"ID":"26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a","Type":"ContainerDied","Data":"c757a6d599d238252cf69718c03a250c4bd64e682f5b5456a36c0a4aa37edbc8"} Jan 26 17:10:53 crc kubenswrapper[4856]: I0126 17:10:53.713466 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e25mvq" event={"ID":"26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a","Type":"ContainerStarted","Data":"13128ab773dafbdc20c6ca5345fca45949f139a13824371598c28655f37fe918"} Jan 26 17:10:53 crc kubenswrapper[4856]: I0126 17:10:53.756143 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7105e655-ab8e-4fc0-b205-0bafaa6d7d91-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxrgwg\" (UID: \"7105e655-ab8e-4fc0-b205-0bafaa6d7d91\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxrgwg" Jan 26 17:10:53 crc kubenswrapper[4856]: I0126 17:10:53.756334 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb25l\" (UniqueName: 
\"kubernetes.io/projected/7105e655-ab8e-4fc0-b205-0bafaa6d7d91-kube-api-access-tb25l\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxrgwg\" (UID: \"7105e655-ab8e-4fc0-b205-0bafaa6d7d91\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxrgwg" Jan 26 17:10:53 crc kubenswrapper[4856]: I0126 17:10:53.756398 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7105e655-ab8e-4fc0-b205-0bafaa6d7d91-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxrgwg\" (UID: \"7105e655-ab8e-4fc0-b205-0bafaa6d7d91\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxrgwg" Jan 26 17:10:53 crc kubenswrapper[4856]: I0126 17:10:53.857324 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb25l\" (UniqueName: \"kubernetes.io/projected/7105e655-ab8e-4fc0-b205-0bafaa6d7d91-kube-api-access-tb25l\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxrgwg\" (UID: \"7105e655-ab8e-4fc0-b205-0bafaa6d7d91\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxrgwg" Jan 26 17:10:53 crc kubenswrapper[4856]: I0126 17:10:53.857383 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7105e655-ab8e-4fc0-b205-0bafaa6d7d91-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxrgwg\" (UID: \"7105e655-ab8e-4fc0-b205-0bafaa6d7d91\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxrgwg" Jan 26 17:10:53 crc kubenswrapper[4856]: I0126 17:10:53.857436 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7105e655-ab8e-4fc0-b205-0bafaa6d7d91-bundle\") pod 
\"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxrgwg\" (UID: \"7105e655-ab8e-4fc0-b205-0bafaa6d7d91\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxrgwg" Jan 26 17:10:53 crc kubenswrapper[4856]: I0126 17:10:53.857942 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7105e655-ab8e-4fc0-b205-0bafaa6d7d91-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxrgwg\" (UID: \"7105e655-ab8e-4fc0-b205-0bafaa6d7d91\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxrgwg" Jan 26 17:10:53 crc kubenswrapper[4856]: I0126 17:10:53.858614 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7105e655-ab8e-4fc0-b205-0bafaa6d7d91-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxrgwg\" (UID: \"7105e655-ab8e-4fc0-b205-0bafaa6d7d91\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxrgwg" Jan 26 17:10:53 crc kubenswrapper[4856]: I0126 17:10:53.879681 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb25l\" (UniqueName: \"kubernetes.io/projected/7105e655-ab8e-4fc0-b205-0bafaa6d7d91-kube-api-access-tb25l\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxrgwg\" (UID: \"7105e655-ab8e-4fc0-b205-0bafaa6d7d91\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxrgwg" Jan 26 17:10:53 crc kubenswrapper[4856]: I0126 17:10:53.993648 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxrgwg" Jan 26 17:10:54 crc kubenswrapper[4856]: I0126 17:10:54.184107 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxrgwg"] Jan 26 17:10:54 crc kubenswrapper[4856]: W0126 17:10:54.192855 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7105e655_ab8e_4fc0_b205_0bafaa6d7d91.slice/crio-56b1c09b7117c32b89d576119b3e75e1f8c7130cbb907853b17119459ef4324f WatchSource:0}: Error finding container 56b1c09b7117c32b89d576119b3e75e1f8c7130cbb907853b17119459ef4324f: Status 404 returned error can't find the container with id 56b1c09b7117c32b89d576119b3e75e1f8c7130cbb907853b17119459ef4324f Jan 26 17:10:54 crc kubenswrapper[4856]: I0126 17:10:54.726844 4856 generic.go:334] "Generic (PLEG): container finished" podID="7105e655-ab8e-4fc0-b205-0bafaa6d7d91" containerID="b3bfd31dd62bcd8a06a4db75b99fa7123e1c84f28625c8d87bd297a28e6a4deb" exitCode=0 Jan 26 17:10:54 crc kubenswrapper[4856]: I0126 17:10:54.726886 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxrgwg" event={"ID":"7105e655-ab8e-4fc0-b205-0bafaa6d7d91","Type":"ContainerDied","Data":"b3bfd31dd62bcd8a06a4db75b99fa7123e1c84f28625c8d87bd297a28e6a4deb"} Jan 26 17:10:54 crc kubenswrapper[4856]: I0126 17:10:54.726912 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxrgwg" event={"ID":"7105e655-ab8e-4fc0-b205-0bafaa6d7d91","Type":"ContainerStarted","Data":"56b1c09b7117c32b89d576119b3e75e1f8c7130cbb907853b17119459ef4324f"} Jan 26 17:10:55 crc kubenswrapper[4856]: I0126 17:10:55.734382 4856 generic.go:334] "Generic (PLEG): container finished" 
podID="26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a" containerID="3ab1e58078f8b846013be46f1e8be7d2a7305d073bb29980d5f81d71ab43e80d" exitCode=0 Jan 26 17:10:55 crc kubenswrapper[4856]: I0126 17:10:55.734692 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e25mvq" event={"ID":"26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a","Type":"ContainerDied","Data":"3ab1e58078f8b846013be46f1e8be7d2a7305d073bb29980d5f81d71ab43e80d"} Jan 26 17:10:56 crc kubenswrapper[4856]: I0126 17:10:56.748121 4856 generic.go:334] "Generic (PLEG): container finished" podID="26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a" containerID="ab5c287e0e97e2b7c9c9e2b637238f42096fcbb354e37e64c27c55ba2ae02e28" exitCode=0 Jan 26 17:10:56 crc kubenswrapper[4856]: I0126 17:10:56.748255 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e25mvq" event={"ID":"26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a","Type":"ContainerDied","Data":"ab5c287e0e97e2b7c9c9e2b637238f42096fcbb354e37e64c27c55ba2ae02e28"} Jan 26 17:10:57 crc kubenswrapper[4856]: I0126 17:10:57.757352 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxrgwg" event={"ID":"7105e655-ab8e-4fc0-b205-0bafaa6d7d91","Type":"ContainerStarted","Data":"058a4d4cd2beed1c9c1c61654d4e95d5a1b7ade304852c5ed1486704fbd2de57"} Jan 26 17:10:58 crc kubenswrapper[4856]: I0126 17:10:58.469022 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e25mvq" Jan 26 17:10:58 crc kubenswrapper[4856]: I0126 17:10:58.500136 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2wfr\" (UniqueName: \"kubernetes.io/projected/26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a-kube-api-access-j2wfr\") pod \"26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a\" (UID: \"26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a\") " Jan 26 17:10:58 crc kubenswrapper[4856]: I0126 17:10:58.500216 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a-util\") pod \"26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a\" (UID: \"26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a\") " Jan 26 17:10:58 crc kubenswrapper[4856]: I0126 17:10:58.500257 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a-bundle\") pod \"26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a\" (UID: \"26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a\") " Jan 26 17:10:58 crc kubenswrapper[4856]: I0126 17:10:58.501639 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a-bundle" (OuterVolumeSpecName: "bundle") pod "26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a" (UID: "26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:10:58 crc kubenswrapper[4856]: I0126 17:10:58.544636 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a-util" (OuterVolumeSpecName: "util") pod "26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a" (UID: "26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:10:58 crc kubenswrapper[4856]: I0126 17:10:58.551405 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a-kube-api-access-j2wfr" (OuterVolumeSpecName: "kube-api-access-j2wfr") pod "26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a" (UID: "26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a"). InnerVolumeSpecName "kube-api-access-j2wfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:10:58 crc kubenswrapper[4856]: I0126 17:10:58.601907 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2wfr\" (UniqueName: \"kubernetes.io/projected/26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a-kube-api-access-j2wfr\") on node \"crc\" DevicePath \"\"" Jan 26 17:10:58 crc kubenswrapper[4856]: I0126 17:10:58.601951 4856 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a-util\") on node \"crc\" DevicePath \"\"" Jan 26 17:10:58 crc kubenswrapper[4856]: I0126 17:10:58.601965 4856 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 17:10:58 crc kubenswrapper[4856]: I0126 17:10:58.766940 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e25mvq" event={"ID":"26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a","Type":"ContainerDied","Data":"13128ab773dafbdc20c6ca5345fca45949f139a13824371598c28655f37fe918"} Jan 26 17:10:58 crc kubenswrapper[4856]: I0126 17:10:58.766974 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e25mvq" Jan 26 17:10:58 crc kubenswrapper[4856]: I0126 17:10:58.766987 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13128ab773dafbdc20c6ca5345fca45949f139a13824371598c28655f37fe918" Jan 26 17:10:58 crc kubenswrapper[4856]: I0126 17:10:58.769253 4856 generic.go:334] "Generic (PLEG): container finished" podID="7105e655-ab8e-4fc0-b205-0bafaa6d7d91" containerID="058a4d4cd2beed1c9c1c61654d4e95d5a1b7ade304852c5ed1486704fbd2de57" exitCode=0 Jan 26 17:10:58 crc kubenswrapper[4856]: I0126 17:10:58.769303 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxrgwg" event={"ID":"7105e655-ab8e-4fc0-b205-0bafaa6d7d91","Type":"ContainerDied","Data":"058a4d4cd2beed1c9c1c61654d4e95d5a1b7ade304852c5ed1486704fbd2de57"} Jan 26 17:10:59 crc kubenswrapper[4856]: I0126 17:10:59.540013 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ank5l6"] Jan 26 17:10:59 crc kubenswrapper[4856]: E0126 17:10:59.540331 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a" containerName="extract" Jan 26 17:10:59 crc kubenswrapper[4856]: I0126 17:10:59.540353 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a" containerName="extract" Jan 26 17:10:59 crc kubenswrapper[4856]: E0126 17:10:59.540371 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a" containerName="pull" Jan 26 17:10:59 crc kubenswrapper[4856]: I0126 17:10:59.540380 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a" containerName="pull" Jan 26 17:10:59 crc kubenswrapper[4856]: E0126 17:10:59.540402 4856 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a" containerName="util" Jan 26 17:10:59 crc kubenswrapper[4856]: I0126 17:10:59.540413 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a" containerName="util" Jan 26 17:10:59 crc kubenswrapper[4856]: I0126 17:10:59.540554 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a" containerName="extract" Jan 26 17:10:59 crc kubenswrapper[4856]: I0126 17:10:59.541552 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ank5l6" Jan 26 17:10:59 crc kubenswrapper[4856]: I0126 17:10:59.557964 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ank5l6"] Jan 26 17:10:59 crc kubenswrapper[4856]: I0126 17:10:59.618600 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6521dc23-8f4e-452f-ae3e-167424fa3ed2-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ank5l6\" (UID: \"6521dc23-8f4e-452f-ae3e-167424fa3ed2\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ank5l6" Jan 26 17:10:59 crc kubenswrapper[4856]: I0126 17:10:59.618704 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6521dc23-8f4e-452f-ae3e-167424fa3ed2-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ank5l6\" (UID: \"6521dc23-8f4e-452f-ae3e-167424fa3ed2\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ank5l6" Jan 26 17:10:59 crc kubenswrapper[4856]: I0126 17:10:59.618738 4856 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vksql\" (UniqueName: \"kubernetes.io/projected/6521dc23-8f4e-452f-ae3e-167424fa3ed2-kube-api-access-vksql\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ank5l6\" (UID: \"6521dc23-8f4e-452f-ae3e-167424fa3ed2\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ank5l6" Jan 26 17:10:59 crc kubenswrapper[4856]: I0126 17:10:59.720231 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6521dc23-8f4e-452f-ae3e-167424fa3ed2-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ank5l6\" (UID: \"6521dc23-8f4e-452f-ae3e-167424fa3ed2\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ank5l6" Jan 26 17:10:59 crc kubenswrapper[4856]: I0126 17:10:59.720631 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6521dc23-8f4e-452f-ae3e-167424fa3ed2-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ank5l6\" (UID: \"6521dc23-8f4e-452f-ae3e-167424fa3ed2\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ank5l6" Jan 26 17:10:59 crc kubenswrapper[4856]: I0126 17:10:59.720662 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vksql\" (UniqueName: \"kubernetes.io/projected/6521dc23-8f4e-452f-ae3e-167424fa3ed2-kube-api-access-vksql\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ank5l6\" (UID: \"6521dc23-8f4e-452f-ae3e-167424fa3ed2\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ank5l6" Jan 26 17:10:59 crc kubenswrapper[4856]: I0126 17:10:59.720983 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/6521dc23-8f4e-452f-ae3e-167424fa3ed2-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ank5l6\" (UID: \"6521dc23-8f4e-452f-ae3e-167424fa3ed2\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ank5l6" Jan 26 17:10:59 crc kubenswrapper[4856]: I0126 17:10:59.721187 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6521dc23-8f4e-452f-ae3e-167424fa3ed2-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ank5l6\" (UID: \"6521dc23-8f4e-452f-ae3e-167424fa3ed2\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ank5l6" Jan 26 17:10:59 crc kubenswrapper[4856]: I0126 17:10:59.752881 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vksql\" (UniqueName: \"kubernetes.io/projected/6521dc23-8f4e-452f-ae3e-167424fa3ed2-kube-api-access-vksql\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ank5l6\" (UID: \"6521dc23-8f4e-452f-ae3e-167424fa3ed2\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ank5l6" Jan 26 17:10:59 crc kubenswrapper[4856]: I0126 17:10:59.777156 4856 generic.go:334] "Generic (PLEG): container finished" podID="7105e655-ab8e-4fc0-b205-0bafaa6d7d91" containerID="7145619c0c090acd718be2468e255969f24b82973542e62ca9c51a6b03860c3e" exitCode=0 Jan 26 17:10:59 crc kubenswrapper[4856]: I0126 17:10:59.777205 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxrgwg" event={"ID":"7105e655-ab8e-4fc0-b205-0bafaa6d7d91","Type":"ContainerDied","Data":"7145619c0c090acd718be2468e255969f24b82973542e62ca9c51a6b03860c3e"} Jan 26 17:10:59 crc kubenswrapper[4856]: I0126 17:10:59.924504 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ank5l6" Jan 26 17:11:01 crc kubenswrapper[4856]: I0126 17:11:01.362156 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ank5l6"] Jan 26 17:11:01 crc kubenswrapper[4856]: I0126 17:11:01.534837 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxrgwg" Jan 26 17:11:01 crc kubenswrapper[4856]: I0126 17:11:01.611640 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7105e655-ab8e-4fc0-b205-0bafaa6d7d91-util\") pod \"7105e655-ab8e-4fc0-b205-0bafaa6d7d91\" (UID: \"7105e655-ab8e-4fc0-b205-0bafaa6d7d91\") " Jan 26 17:11:01 crc kubenswrapper[4856]: I0126 17:11:01.611832 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7105e655-ab8e-4fc0-b205-0bafaa6d7d91-bundle\") pod \"7105e655-ab8e-4fc0-b205-0bafaa6d7d91\" (UID: \"7105e655-ab8e-4fc0-b205-0bafaa6d7d91\") " Jan 26 17:11:01 crc kubenswrapper[4856]: I0126 17:11:01.611876 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tb25l\" (UniqueName: \"kubernetes.io/projected/7105e655-ab8e-4fc0-b205-0bafaa6d7d91-kube-api-access-tb25l\") pod \"7105e655-ab8e-4fc0-b205-0bafaa6d7d91\" (UID: \"7105e655-ab8e-4fc0-b205-0bafaa6d7d91\") " Jan 26 17:11:01 crc kubenswrapper[4856]: I0126 17:11:01.612963 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7105e655-ab8e-4fc0-b205-0bafaa6d7d91-bundle" (OuterVolumeSpecName: "bundle") pod "7105e655-ab8e-4fc0-b205-0bafaa6d7d91" (UID: "7105e655-ab8e-4fc0-b205-0bafaa6d7d91"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:11:01 crc kubenswrapper[4856]: I0126 17:11:01.632784 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7105e655-ab8e-4fc0-b205-0bafaa6d7d91-kube-api-access-tb25l" (OuterVolumeSpecName: "kube-api-access-tb25l") pod "7105e655-ab8e-4fc0-b205-0bafaa6d7d91" (UID: "7105e655-ab8e-4fc0-b205-0bafaa6d7d91"). InnerVolumeSpecName "kube-api-access-tb25l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:11:01 crc kubenswrapper[4856]: I0126 17:11:01.636673 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7105e655-ab8e-4fc0-b205-0bafaa6d7d91-util" (OuterVolumeSpecName: "util") pod "7105e655-ab8e-4fc0-b205-0bafaa6d7d91" (UID: "7105e655-ab8e-4fc0-b205-0bafaa6d7d91"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:11:01 crc kubenswrapper[4856]: I0126 17:11:01.713766 4856 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7105e655-ab8e-4fc0-b205-0bafaa6d7d91-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 17:11:01 crc kubenswrapper[4856]: I0126 17:11:01.714118 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tb25l\" (UniqueName: \"kubernetes.io/projected/7105e655-ab8e-4fc0-b205-0bafaa6d7d91-kube-api-access-tb25l\") on node \"crc\" DevicePath \"\"" Jan 26 17:11:01 crc kubenswrapper[4856]: I0126 17:11:01.714131 4856 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7105e655-ab8e-4fc0-b205-0bafaa6d7d91-util\") on node \"crc\" DevicePath \"\"" Jan 26 17:11:01 crc kubenswrapper[4856]: I0126 17:11:01.874509 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ank5l6" 
event={"ID":"6521dc23-8f4e-452f-ae3e-167424fa3ed2","Type":"ContainerStarted","Data":"2a431074e034cf10e4752b83a72e67b08ab250c4805b0ef25b26d6818d7e9e5d"} Jan 26 17:11:01 crc kubenswrapper[4856]: I0126 17:11:01.876801 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxrgwg" event={"ID":"7105e655-ab8e-4fc0-b205-0bafaa6d7d91","Type":"ContainerDied","Data":"56b1c09b7117c32b89d576119b3e75e1f8c7130cbb907853b17119459ef4324f"} Jan 26 17:11:01 crc kubenswrapper[4856]: I0126 17:11:01.876865 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56b1c09b7117c32b89d576119b3e75e1f8c7130cbb907853b17119459ef4324f" Jan 26 17:11:01 crc kubenswrapper[4856]: I0126 17:11:01.876862 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxrgwg" Jan 26 17:11:02 crc kubenswrapper[4856]: I0126 17:11:02.963652 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ank5l6" event={"ID":"6521dc23-8f4e-452f-ae3e-167424fa3ed2","Type":"ContainerStarted","Data":"c153773f3fa63e0d5a49b944e935649f372715f37305b42c32405d4d2a56f4ad"} Jan 26 17:11:03 crc kubenswrapper[4856]: I0126 17:11:03.971455 4856 generic.go:334] "Generic (PLEG): container finished" podID="6521dc23-8f4e-452f-ae3e-167424fa3ed2" containerID="c153773f3fa63e0d5a49b944e935649f372715f37305b42c32405d4d2a56f4ad" exitCode=0 Jan 26 17:11:03 crc kubenswrapper[4856]: I0126 17:11:03.971603 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ank5l6" event={"ID":"6521dc23-8f4e-452f-ae3e-167424fa3ed2","Type":"ContainerDied","Data":"c153773f3fa63e0d5a49b944e935649f372715f37305b42c32405d4d2a56f4ad"} Jan 26 17:11:05 crc kubenswrapper[4856]: I0126 
17:11:05.736241 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-cq2gx"] Jan 26 17:11:05 crc kubenswrapper[4856]: E0126 17:11:05.736830 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7105e655-ab8e-4fc0-b205-0bafaa6d7d91" containerName="pull" Jan 26 17:11:05 crc kubenswrapper[4856]: I0126 17:11:05.736845 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="7105e655-ab8e-4fc0-b205-0bafaa6d7d91" containerName="pull" Jan 26 17:11:05 crc kubenswrapper[4856]: E0126 17:11:05.736862 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7105e655-ab8e-4fc0-b205-0bafaa6d7d91" containerName="extract" Jan 26 17:11:05 crc kubenswrapper[4856]: I0126 17:11:05.736868 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="7105e655-ab8e-4fc0-b205-0bafaa6d7d91" containerName="extract" Jan 26 17:11:05 crc kubenswrapper[4856]: E0126 17:11:05.736878 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7105e655-ab8e-4fc0-b205-0bafaa6d7d91" containerName="util" Jan 26 17:11:05 crc kubenswrapper[4856]: I0126 17:11:05.736883 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="7105e655-ab8e-4fc0-b205-0bafaa6d7d91" containerName="util" Jan 26 17:11:05 crc kubenswrapper[4856]: I0126 17:11:05.736991 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="7105e655-ab8e-4fc0-b205-0bafaa6d7d91" containerName="extract" Jan 26 17:11:05 crc kubenswrapper[4856]: I0126 17:11:05.737353 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cq2gx" Jan 26 17:11:05 crc kubenswrapper[4856]: I0126 17:11:05.744644 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 26 17:11:05 crc kubenswrapper[4856]: I0126 17:11:05.751219 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-k99fx" Jan 26 17:11:05 crc kubenswrapper[4856]: I0126 17:11:05.751272 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 26 17:11:05 crc kubenswrapper[4856]: I0126 17:11:05.762068 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-cq2gx"] Jan 26 17:11:05 crc kubenswrapper[4856]: I0126 17:11:05.848407 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmwsq\" (UniqueName: \"kubernetes.io/projected/e31d2d53-8992-45e3-98aa-24ea73236248-kube-api-access-bmwsq\") pod \"obo-prometheus-operator-68bc856cb9-cq2gx\" (UID: \"e31d2d53-8992-45e3-98aa-24ea73236248\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cq2gx" Jan 26 17:11:05 crc kubenswrapper[4856]: I0126 17:11:05.889863 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-68d9fdc4dd-jbq25"] Jan 26 17:11:05 crc kubenswrapper[4856]: I0126 17:11:05.890971 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68d9fdc4dd-jbq25" Jan 26 17:11:05 crc kubenswrapper[4856]: I0126 17:11:05.893443 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 26 17:11:05 crc kubenswrapper[4856]: I0126 17:11:05.896975 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-lmkrv" Jan 26 17:11:05 crc kubenswrapper[4856]: I0126 17:11:05.903205 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-68d9fdc4dd-cf7wn"] Jan 26 17:11:05 crc kubenswrapper[4856]: I0126 17:11:05.904100 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68d9fdc4dd-cf7wn" Jan 26 17:11:05 crc kubenswrapper[4856]: I0126 17:11:05.915624 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-68d9fdc4dd-jbq25"] Jan 26 17:11:05 crc kubenswrapper[4856]: I0126 17:11:05.925644 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-68d9fdc4dd-cf7wn"] Jan 26 17:11:05 crc kubenswrapper[4856]: I0126 17:11:05.953626 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmwsq\" (UniqueName: \"kubernetes.io/projected/e31d2d53-8992-45e3-98aa-24ea73236248-kube-api-access-bmwsq\") pod \"obo-prometheus-operator-68bc856cb9-cq2gx\" (UID: \"e31d2d53-8992-45e3-98aa-24ea73236248\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cq2gx" Jan 26 17:11:05 crc kubenswrapper[4856]: I0126 17:11:05.953700 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/7c88687f-1304-4709-b148-a196f0d0190d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-68d9fdc4dd-cf7wn\" (UID: \"7c88687f-1304-4709-b148-a196f0d0190d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-68d9fdc4dd-cf7wn" Jan 26 17:11:05 crc kubenswrapper[4856]: I0126 17:11:05.953802 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/766f50ba-0751-4f25-a6db-3b7195e72f55-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-68d9fdc4dd-jbq25\" (UID: \"766f50ba-0751-4f25-a6db-3b7195e72f55\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-68d9fdc4dd-jbq25" Jan 26 17:11:05 crc kubenswrapper[4856]: I0126 17:11:05.953864 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/766f50ba-0751-4f25-a6db-3b7195e72f55-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-68d9fdc4dd-jbq25\" (UID: \"766f50ba-0751-4f25-a6db-3b7195e72f55\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-68d9fdc4dd-jbq25" Jan 26 17:11:05 crc kubenswrapper[4856]: I0126 17:11:05.953896 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7c88687f-1304-4709-b148-a196f0d0190d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-68d9fdc4dd-cf7wn\" (UID: \"7c88687f-1304-4709-b148-a196f0d0190d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-68d9fdc4dd-cf7wn" Jan 26 17:11:05 crc kubenswrapper[4856]: I0126 17:11:05.978852 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmwsq\" (UniqueName: \"kubernetes.io/projected/e31d2d53-8992-45e3-98aa-24ea73236248-kube-api-access-bmwsq\") pod 
\"obo-prometheus-operator-68bc856cb9-cq2gx\" (UID: \"e31d2d53-8992-45e3-98aa-24ea73236248\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cq2gx" Jan 26 17:11:06 crc kubenswrapper[4856]: I0126 17:11:06.055473 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/766f50ba-0751-4f25-a6db-3b7195e72f55-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-68d9fdc4dd-jbq25\" (UID: \"766f50ba-0751-4f25-a6db-3b7195e72f55\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-68d9fdc4dd-jbq25" Jan 26 17:11:06 crc kubenswrapper[4856]: I0126 17:11:06.055534 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7c88687f-1304-4709-b148-a196f0d0190d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-68d9fdc4dd-cf7wn\" (UID: \"7c88687f-1304-4709-b148-a196f0d0190d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-68d9fdc4dd-cf7wn" Jan 26 17:11:06 crc kubenswrapper[4856]: I0126 17:11:06.055569 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7c88687f-1304-4709-b148-a196f0d0190d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-68d9fdc4dd-cf7wn\" (UID: \"7c88687f-1304-4709-b148-a196f0d0190d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-68d9fdc4dd-cf7wn" Jan 26 17:11:06 crc kubenswrapper[4856]: I0126 17:11:06.055622 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/766f50ba-0751-4f25-a6db-3b7195e72f55-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-68d9fdc4dd-jbq25\" (UID: \"766f50ba-0751-4f25-a6db-3b7195e72f55\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-68d9fdc4dd-jbq25" Jan 26 17:11:06 
crc kubenswrapper[4856]: I0126 17:11:06.056149 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cq2gx" Jan 26 17:11:06 crc kubenswrapper[4856]: I0126 17:11:06.059761 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7c88687f-1304-4709-b148-a196f0d0190d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-68d9fdc4dd-cf7wn\" (UID: \"7c88687f-1304-4709-b148-a196f0d0190d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-68d9fdc4dd-cf7wn" Jan 26 17:11:06 crc kubenswrapper[4856]: I0126 17:11:06.060207 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7c88687f-1304-4709-b148-a196f0d0190d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-68d9fdc4dd-cf7wn\" (UID: \"7c88687f-1304-4709-b148-a196f0d0190d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-68d9fdc4dd-cf7wn" Jan 26 17:11:06 crc kubenswrapper[4856]: I0126 17:11:06.079102 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/766f50ba-0751-4f25-a6db-3b7195e72f55-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-68d9fdc4dd-jbq25\" (UID: \"766f50ba-0751-4f25-a6db-3b7195e72f55\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-68d9fdc4dd-jbq25" Jan 26 17:11:06 crc kubenswrapper[4856]: I0126 17:11:06.091182 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/766f50ba-0751-4f25-a6db-3b7195e72f55-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-68d9fdc4dd-jbq25\" (UID: \"766f50ba-0751-4f25-a6db-3b7195e72f55\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-68d9fdc4dd-jbq25" Jan 26 17:11:06 crc 
kubenswrapper[4856]: I0126 17:11:06.165899 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-fpn2h"] Jan 26 17:11:06 crc kubenswrapper[4856]: I0126 17:11:06.166667 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-fpn2h" Jan 26 17:11:06 crc kubenswrapper[4856]: I0126 17:11:06.171906 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 26 17:11:06 crc kubenswrapper[4856]: I0126 17:11:06.174220 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-8x6vt" Jan 26 17:11:06 crc kubenswrapper[4856]: I0126 17:11:06.182405 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-fpn2h"] Jan 26 17:11:06 crc kubenswrapper[4856]: I0126 17:11:06.205427 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68d9fdc4dd-jbq25" Jan 26 17:11:06 crc kubenswrapper[4856]: I0126 17:11:06.217782 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68d9fdc4dd-cf7wn" Jan 26 17:11:06 crc kubenswrapper[4856]: I0126 17:11:06.258799 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/a4ae7646-2afb-4ada-b8a4-d20a69f87949-observability-operator-tls\") pod \"observability-operator-59bdc8b94-fpn2h\" (UID: \"a4ae7646-2afb-4ada-b8a4-d20a69f87949\") " pod="openshift-operators/observability-operator-59bdc8b94-fpn2h" Jan 26 17:11:06 crc kubenswrapper[4856]: I0126 17:11:06.258867 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2ptr\" (UniqueName: \"kubernetes.io/projected/a4ae7646-2afb-4ada-b8a4-d20a69f87949-kube-api-access-n2ptr\") pod \"observability-operator-59bdc8b94-fpn2h\" (UID: \"a4ae7646-2afb-4ada-b8a4-d20a69f87949\") " pod="openshift-operators/observability-operator-59bdc8b94-fpn2h" Jan 26 17:11:06 crc kubenswrapper[4856]: I0126 17:11:06.304954 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-5bmfp"] Jan 26 17:11:06 crc kubenswrapper[4856]: I0126 17:11:06.306618 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-5bmfp" Jan 26 17:11:06 crc kubenswrapper[4856]: I0126 17:11:06.311124 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-6jlcg" Jan 26 17:11:06 crc kubenswrapper[4856]: I0126 17:11:06.349743 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-5bmfp"] Jan 26 17:11:06 crc kubenswrapper[4856]: I0126 17:11:06.427478 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmjhr\" (UniqueName: \"kubernetes.io/projected/bd7597f2-d44b-4e1b-ac60-b409985e3351-kube-api-access-tmjhr\") pod \"perses-operator-5bf474d74f-5bmfp\" (UID: \"bd7597f2-d44b-4e1b-ac60-b409985e3351\") " pod="openshift-operators/perses-operator-5bf474d74f-5bmfp" Jan 26 17:11:06 crc kubenswrapper[4856]: I0126 17:11:06.427616 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/bd7597f2-d44b-4e1b-ac60-b409985e3351-openshift-service-ca\") pod \"perses-operator-5bf474d74f-5bmfp\" (UID: \"bd7597f2-d44b-4e1b-ac60-b409985e3351\") " pod="openshift-operators/perses-operator-5bf474d74f-5bmfp" Jan 26 17:11:06 crc kubenswrapper[4856]: I0126 17:11:06.427717 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/a4ae7646-2afb-4ada-b8a4-d20a69f87949-observability-operator-tls\") pod \"observability-operator-59bdc8b94-fpn2h\" (UID: \"a4ae7646-2afb-4ada-b8a4-d20a69f87949\") " pod="openshift-operators/observability-operator-59bdc8b94-fpn2h" Jan 26 17:11:06 crc kubenswrapper[4856]: I0126 17:11:06.427808 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2ptr\" (UniqueName: 
\"kubernetes.io/projected/a4ae7646-2afb-4ada-b8a4-d20a69f87949-kube-api-access-n2ptr\") pod \"observability-operator-59bdc8b94-fpn2h\" (UID: \"a4ae7646-2afb-4ada-b8a4-d20a69f87949\") " pod="openshift-operators/observability-operator-59bdc8b94-fpn2h" Jan 26 17:11:06 crc kubenswrapper[4856]: I0126 17:11:06.447779 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/a4ae7646-2afb-4ada-b8a4-d20a69f87949-observability-operator-tls\") pod \"observability-operator-59bdc8b94-fpn2h\" (UID: \"a4ae7646-2afb-4ada-b8a4-d20a69f87949\") " pod="openshift-operators/observability-operator-59bdc8b94-fpn2h" Jan 26 17:11:06 crc kubenswrapper[4856]: I0126 17:11:06.450429 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2ptr\" (UniqueName: \"kubernetes.io/projected/a4ae7646-2afb-4ada-b8a4-d20a69f87949-kube-api-access-n2ptr\") pod \"observability-operator-59bdc8b94-fpn2h\" (UID: \"a4ae7646-2afb-4ada-b8a4-d20a69f87949\") " pod="openshift-operators/observability-operator-59bdc8b94-fpn2h" Jan 26 17:11:06 crc kubenswrapper[4856]: I0126 17:11:06.500476 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-fpn2h" Jan 26 17:11:06 crc kubenswrapper[4856]: I0126 17:11:06.533234 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmjhr\" (UniqueName: \"kubernetes.io/projected/bd7597f2-d44b-4e1b-ac60-b409985e3351-kube-api-access-tmjhr\") pod \"perses-operator-5bf474d74f-5bmfp\" (UID: \"bd7597f2-d44b-4e1b-ac60-b409985e3351\") " pod="openshift-operators/perses-operator-5bf474d74f-5bmfp" Jan 26 17:11:06 crc kubenswrapper[4856]: I0126 17:11:06.533313 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/bd7597f2-d44b-4e1b-ac60-b409985e3351-openshift-service-ca\") pod \"perses-operator-5bf474d74f-5bmfp\" (UID: \"bd7597f2-d44b-4e1b-ac60-b409985e3351\") " pod="openshift-operators/perses-operator-5bf474d74f-5bmfp" Jan 26 17:11:06 crc kubenswrapper[4856]: I0126 17:11:06.535188 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/bd7597f2-d44b-4e1b-ac60-b409985e3351-openshift-service-ca\") pod \"perses-operator-5bf474d74f-5bmfp\" (UID: \"bd7597f2-d44b-4e1b-ac60-b409985e3351\") " pod="openshift-operators/perses-operator-5bf474d74f-5bmfp" Jan 26 17:11:06 crc kubenswrapper[4856]: I0126 17:11:06.610388 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmjhr\" (UniqueName: \"kubernetes.io/projected/bd7597f2-d44b-4e1b-ac60-b409985e3351-kube-api-access-tmjhr\") pod \"perses-operator-5bf474d74f-5bmfp\" (UID: \"bd7597f2-d44b-4e1b-ac60-b409985e3351\") " pod="openshift-operators/perses-operator-5bf474d74f-5bmfp" Jan 26 17:11:06 crc kubenswrapper[4856]: I0126 17:11:06.686708 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-cq2gx"] Jan 26 17:11:06 crc kubenswrapper[4856]: I0126 
17:11:06.691050 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-5bmfp" Jan 26 17:11:06 crc kubenswrapper[4856]: I0126 17:11:06.837065 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-68d9fdc4dd-cf7wn"] Jan 26 17:11:06 crc kubenswrapper[4856]: W0126 17:11:06.860801 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c88687f_1304_4709_b148_a196f0d0190d.slice/crio-3add66ed30f57595204f43560d849f6ef5b5e72b9812deeb4561fc229fac034a WatchSource:0}: Error finding container 3add66ed30f57595204f43560d849f6ef5b5e72b9812deeb4561fc229fac034a: Status 404 returned error can't find the container with id 3add66ed30f57595204f43560d849f6ef5b5e72b9812deeb4561fc229fac034a Jan 26 17:11:06 crc kubenswrapper[4856]: I0126 17:11:06.932901 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-fpn2h"] Jan 26 17:11:06 crc kubenswrapper[4856]: W0126 17:11:06.938469 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4ae7646_2afb_4ada_b8a4_d20a69f87949.slice/crio-0d3567099a83e901da442c5a280e90fc2283599dea6ed1bb6faac6fbd1659711 WatchSource:0}: Error finding container 0d3567099a83e901da442c5a280e90fc2283599dea6ed1bb6faac6fbd1659711: Status 404 returned error can't find the container with id 0d3567099a83e901da442c5a280e90fc2283599dea6ed1bb6faac6fbd1659711 Jan 26 17:11:06 crc kubenswrapper[4856]: I0126 17:11:06.958300 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-68d9fdc4dd-jbq25"] Jan 26 17:11:06 crc kubenswrapper[4856]: I0126 17:11:06.972231 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-5bmfp"] 
Jan 26 17:11:06 crc kubenswrapper[4856]: W0126 17:11:06.974670 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod766f50ba_0751_4f25_a6db_3b7195e72f55.slice/crio-058bea58322e620b766ee5384d671712d335f6e308c7fab9e9134a7e4f0b21f4 WatchSource:0}: Error finding container 058bea58322e620b766ee5384d671712d335f6e308c7fab9e9134a7e4f0b21f4: Status 404 returned error can't find the container with id 058bea58322e620b766ee5384d671712d335f6e308c7fab9e9134a7e4f0b21f4 Jan 26 17:11:06 crc kubenswrapper[4856]: W0126 17:11:06.978655 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd7597f2_d44b_4e1b_ac60_b409985e3351.slice/crio-de73b797d25e7aab5cc4db5082d70b93dde8cc46381630d4b1bb2cc17b926363 WatchSource:0}: Error finding container de73b797d25e7aab5cc4db5082d70b93dde8cc46381630d4b1bb2cc17b926363: Status 404 returned error can't find the container with id de73b797d25e7aab5cc4db5082d70b93dde8cc46381630d4b1bb2cc17b926363 Jan 26 17:11:07 crc kubenswrapper[4856]: I0126 17:11:07.042126 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cq2gx" event={"ID":"e31d2d53-8992-45e3-98aa-24ea73236248","Type":"ContainerStarted","Data":"89868836e252b26754a35e23094de18e5e17220691fdb6335046087453e4be01"} Jan 26 17:11:07 crc kubenswrapper[4856]: I0126 17:11:07.048616 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68d9fdc4dd-cf7wn" event={"ID":"7c88687f-1304-4709-b148-a196f0d0190d","Type":"ContainerStarted","Data":"3add66ed30f57595204f43560d849f6ef5b5e72b9812deeb4561fc229fac034a"} Jan 26 17:11:07 crc kubenswrapper[4856]: I0126 17:11:07.049884 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-fpn2h" 
event={"ID":"a4ae7646-2afb-4ada-b8a4-d20a69f87949","Type":"ContainerStarted","Data":"0d3567099a83e901da442c5a280e90fc2283599dea6ed1bb6faac6fbd1659711"} Jan 26 17:11:07 crc kubenswrapper[4856]: I0126 17:11:07.051171 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-5bmfp" event={"ID":"bd7597f2-d44b-4e1b-ac60-b409985e3351","Type":"ContainerStarted","Data":"de73b797d25e7aab5cc4db5082d70b93dde8cc46381630d4b1bb2cc17b926363"} Jan 26 17:11:07 crc kubenswrapper[4856]: I0126 17:11:07.052477 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68d9fdc4dd-jbq25" event={"ID":"766f50ba-0751-4f25-a6db-3b7195e72f55","Type":"ContainerStarted","Data":"058bea58322e620b766ee5384d671712d335f6e308c7fab9e9134a7e4f0b21f4"} Jan 26 17:11:08 crc kubenswrapper[4856]: I0126 17:11:08.599905 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-bf765cf6c-gbst6"] Jan 26 17:11:08 crc kubenswrapper[4856]: I0126 17:11:08.600948 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elastic-operator-bf765cf6c-gbst6" Jan 26 17:11:08 crc kubenswrapper[4856]: I0126 17:11:08.602881 4856 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-service-cert" Jan 26 17:11:08 crc kubenswrapper[4856]: I0126 17:11:08.603506 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"openshift-service-ca.crt" Jan 26 17:11:08 crc kubenswrapper[4856]: I0126 17:11:08.603783 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"kube-root-ca.crt" Jan 26 17:11:08 crc kubenswrapper[4856]: I0126 17:11:08.603802 4856 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-dockercfg-bxj5j" Jan 26 17:11:08 crc kubenswrapper[4856]: I0126 17:11:08.608641 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-bf765cf6c-gbst6"] Jan 26 17:11:08 crc kubenswrapper[4856]: I0126 17:11:08.666194 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9988655e-7b1f-443e-a102-68665719162a-apiservice-cert\") pod \"elastic-operator-bf765cf6c-gbst6\" (UID: \"9988655e-7b1f-443e-a102-68665719162a\") " pod="service-telemetry/elastic-operator-bf765cf6c-gbst6" Jan 26 17:11:08 crc kubenswrapper[4856]: I0126 17:11:08.666295 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7wld\" (UniqueName: \"kubernetes.io/projected/9988655e-7b1f-443e-a102-68665719162a-kube-api-access-z7wld\") pod \"elastic-operator-bf765cf6c-gbst6\" (UID: \"9988655e-7b1f-443e-a102-68665719162a\") " pod="service-telemetry/elastic-operator-bf765cf6c-gbst6" Jan 26 17:11:08 crc kubenswrapper[4856]: I0126 17:11:08.666328 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9988655e-7b1f-443e-a102-68665719162a-webhook-cert\") pod \"elastic-operator-bf765cf6c-gbst6\" (UID: \"9988655e-7b1f-443e-a102-68665719162a\") " pod="service-telemetry/elastic-operator-bf765cf6c-gbst6" Jan 26 17:11:08 crc kubenswrapper[4856]: I0126 17:11:08.767607 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7wld\" (UniqueName: \"kubernetes.io/projected/9988655e-7b1f-443e-a102-68665719162a-kube-api-access-z7wld\") pod \"elastic-operator-bf765cf6c-gbst6\" (UID: \"9988655e-7b1f-443e-a102-68665719162a\") " pod="service-telemetry/elastic-operator-bf765cf6c-gbst6" Jan 26 17:11:08 crc kubenswrapper[4856]: I0126 17:11:08.767664 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9988655e-7b1f-443e-a102-68665719162a-webhook-cert\") pod \"elastic-operator-bf765cf6c-gbst6\" (UID: \"9988655e-7b1f-443e-a102-68665719162a\") " pod="service-telemetry/elastic-operator-bf765cf6c-gbst6" Jan 26 17:11:08 crc kubenswrapper[4856]: I0126 17:11:08.767731 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9988655e-7b1f-443e-a102-68665719162a-apiservice-cert\") pod \"elastic-operator-bf765cf6c-gbst6\" (UID: \"9988655e-7b1f-443e-a102-68665719162a\") " pod="service-telemetry/elastic-operator-bf765cf6c-gbst6" Jan 26 17:11:08 crc kubenswrapper[4856]: I0126 17:11:08.776830 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9988655e-7b1f-443e-a102-68665719162a-apiservice-cert\") pod \"elastic-operator-bf765cf6c-gbst6\" (UID: \"9988655e-7b1f-443e-a102-68665719162a\") " pod="service-telemetry/elastic-operator-bf765cf6c-gbst6" Jan 26 17:11:08 crc kubenswrapper[4856]: I0126 17:11:08.777411 4856 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9988655e-7b1f-443e-a102-68665719162a-webhook-cert\") pod \"elastic-operator-bf765cf6c-gbst6\" (UID: \"9988655e-7b1f-443e-a102-68665719162a\") " pod="service-telemetry/elastic-operator-bf765cf6c-gbst6" Jan 26 17:11:08 crc kubenswrapper[4856]: I0126 17:11:08.797886 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7wld\" (UniqueName: \"kubernetes.io/projected/9988655e-7b1f-443e-a102-68665719162a-kube-api-access-z7wld\") pod \"elastic-operator-bf765cf6c-gbst6\" (UID: \"9988655e-7b1f-443e-a102-68665719162a\") " pod="service-telemetry/elastic-operator-bf765cf6c-gbst6" Jan 26 17:11:08 crc kubenswrapper[4856]: I0126 17:11:08.932196 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-bf765cf6c-gbst6" Jan 26 17:11:11 crc kubenswrapper[4856]: I0126 17:11:11.556924 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-bf765cf6c-gbst6"] Jan 26 17:11:12 crc kubenswrapper[4856]: I0126 17:11:12.097874 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-bf765cf6c-gbst6" event={"ID":"9988655e-7b1f-443e-a102-68665719162a","Type":"ContainerStarted","Data":"233316c791f13f4d4271301bb5da38f318ff55fa2e90ea73e86d39aaa58b095b"} Jan 26 17:11:12 crc kubenswrapper[4856]: I0126 17:11:12.102405 4856 generic.go:334] "Generic (PLEG): container finished" podID="6521dc23-8f4e-452f-ae3e-167424fa3ed2" containerID="bf1c283db10cced461f31388c1dde7855a66c215b11b413948361c0c6d4c1c16" exitCode=0 Jan 26 17:11:12 crc kubenswrapper[4856]: I0126 17:11:12.102447 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ank5l6" event={"ID":"6521dc23-8f4e-452f-ae3e-167424fa3ed2","Type":"ContainerDied","Data":"bf1c283db10cced461f31388c1dde7855a66c215b11b413948361c0c6d4c1c16"} 
Jan 26 17:11:12 crc kubenswrapper[4856]: I0126 17:11:12.362975 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-k8bc4"] Jan 26 17:11:12 crc kubenswrapper[4856]: I0126 17:11:12.364221 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-5bb49f789d-k8bc4" Jan 26 17:11:12 crc kubenswrapper[4856]: I0126 17:11:12.368018 4856 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"interconnect-operator-dockercfg-bffzw" Jan 26 17:11:12 crc kubenswrapper[4856]: I0126 17:11:12.381066 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-k8bc4"] Jan 26 17:11:12 crc kubenswrapper[4856]: I0126 17:11:12.437629 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhz7k\" (UniqueName: \"kubernetes.io/projected/15358471-0b96-4792-aaaa-433823f9ed88-kube-api-access-qhz7k\") pod \"interconnect-operator-5bb49f789d-k8bc4\" (UID: \"15358471-0b96-4792-aaaa-433823f9ed88\") " pod="service-telemetry/interconnect-operator-5bb49f789d-k8bc4" Jan 26 17:11:12 crc kubenswrapper[4856]: I0126 17:11:12.538871 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhz7k\" (UniqueName: \"kubernetes.io/projected/15358471-0b96-4792-aaaa-433823f9ed88-kube-api-access-qhz7k\") pod \"interconnect-operator-5bb49f789d-k8bc4\" (UID: \"15358471-0b96-4792-aaaa-433823f9ed88\") " pod="service-telemetry/interconnect-operator-5bb49f789d-k8bc4" Jan 26 17:11:12 crc kubenswrapper[4856]: I0126 17:11:12.561907 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhz7k\" (UniqueName: \"kubernetes.io/projected/15358471-0b96-4792-aaaa-433823f9ed88-kube-api-access-qhz7k\") pod \"interconnect-operator-5bb49f789d-k8bc4\" (UID: \"15358471-0b96-4792-aaaa-433823f9ed88\") " 
pod="service-telemetry/interconnect-operator-5bb49f789d-k8bc4" Jan 26 17:11:12 crc kubenswrapper[4856]: I0126 17:11:12.690258 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-5bb49f789d-k8bc4" Jan 26 17:11:13 crc kubenswrapper[4856]: I0126 17:11:13.083746 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-k8bc4"] Jan 26 17:11:13 crc kubenswrapper[4856]: I0126 17:11:13.111044 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-5bb49f789d-k8bc4" event={"ID":"15358471-0b96-4792-aaaa-433823f9ed88","Type":"ContainerStarted","Data":"53478756d067fa5ddec5eb67ee04c2f13aedb648f04c0c046a78f3cca4c0a970"} Jan 26 17:11:13 crc kubenswrapper[4856]: I0126 17:11:13.121809 4856 generic.go:334] "Generic (PLEG): container finished" podID="6521dc23-8f4e-452f-ae3e-167424fa3ed2" containerID="1b9c15a7af0a5cddb04adabfe164e630c584bd42ccb27b1da654ea8393613616" exitCode=0 Jan 26 17:11:13 crc kubenswrapper[4856]: I0126 17:11:13.121891 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ank5l6" event={"ID":"6521dc23-8f4e-452f-ae3e-167424fa3ed2","Type":"ContainerDied","Data":"1b9c15a7af0a5cddb04adabfe164e630c584bd42ccb27b1da654ea8393613616"} Jan 26 17:11:19 crc kubenswrapper[4856]: I0126 17:11:19.824934 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ank5l6" Jan 26 17:11:19 crc kubenswrapper[4856]: I0126 17:11:19.872556 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vksql\" (UniqueName: \"kubernetes.io/projected/6521dc23-8f4e-452f-ae3e-167424fa3ed2-kube-api-access-vksql\") pod \"6521dc23-8f4e-452f-ae3e-167424fa3ed2\" (UID: \"6521dc23-8f4e-452f-ae3e-167424fa3ed2\") " Jan 26 17:11:19 crc kubenswrapper[4856]: I0126 17:11:19.872711 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6521dc23-8f4e-452f-ae3e-167424fa3ed2-util\") pod \"6521dc23-8f4e-452f-ae3e-167424fa3ed2\" (UID: \"6521dc23-8f4e-452f-ae3e-167424fa3ed2\") " Jan 26 17:11:19 crc kubenswrapper[4856]: I0126 17:11:19.872736 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6521dc23-8f4e-452f-ae3e-167424fa3ed2-bundle\") pod \"6521dc23-8f4e-452f-ae3e-167424fa3ed2\" (UID: \"6521dc23-8f4e-452f-ae3e-167424fa3ed2\") " Jan 26 17:11:19 crc kubenswrapper[4856]: I0126 17:11:19.874364 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6521dc23-8f4e-452f-ae3e-167424fa3ed2-bundle" (OuterVolumeSpecName: "bundle") pod "6521dc23-8f4e-452f-ae3e-167424fa3ed2" (UID: "6521dc23-8f4e-452f-ae3e-167424fa3ed2"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:11:19 crc kubenswrapper[4856]: I0126 17:11:19.880311 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6521dc23-8f4e-452f-ae3e-167424fa3ed2-kube-api-access-vksql" (OuterVolumeSpecName: "kube-api-access-vksql") pod "6521dc23-8f4e-452f-ae3e-167424fa3ed2" (UID: "6521dc23-8f4e-452f-ae3e-167424fa3ed2"). InnerVolumeSpecName "kube-api-access-vksql". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:11:19 crc kubenswrapper[4856]: I0126 17:11:19.886074 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6521dc23-8f4e-452f-ae3e-167424fa3ed2-util" (OuterVolumeSpecName: "util") pod "6521dc23-8f4e-452f-ae3e-167424fa3ed2" (UID: "6521dc23-8f4e-452f-ae3e-167424fa3ed2"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:11:19 crc kubenswrapper[4856]: I0126 17:11:19.973747 4856 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6521dc23-8f4e-452f-ae3e-167424fa3ed2-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 17:11:19 crc kubenswrapper[4856]: I0126 17:11:19.973786 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vksql\" (UniqueName: \"kubernetes.io/projected/6521dc23-8f4e-452f-ae3e-167424fa3ed2-kube-api-access-vksql\") on node \"crc\" DevicePath \"\"" Jan 26 17:11:19 crc kubenswrapper[4856]: I0126 17:11:19.973807 4856 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6521dc23-8f4e-452f-ae3e-167424fa3ed2-util\") on node \"crc\" DevicePath \"\"" Jan 26 17:11:20 crc kubenswrapper[4856]: I0126 17:11:20.209034 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ank5l6" event={"ID":"6521dc23-8f4e-452f-ae3e-167424fa3ed2","Type":"ContainerDied","Data":"2a431074e034cf10e4752b83a72e67b08ab250c4805b0ef25b26d6818d7e9e5d"} Jan 26 17:11:20 crc kubenswrapper[4856]: I0126 17:11:20.209090 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a431074e034cf10e4752b83a72e67b08ab250c4805b0ef25b26d6818d7e9e5d" Jan 26 17:11:20 crc kubenswrapper[4856]: I0126 17:11:20.209183 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ank5l6" Jan 26 17:11:25 crc kubenswrapper[4856]: I0126 17:11:25.650308 4856 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 26 17:11:28 crc kubenswrapper[4856]: E0126 17:11:28.244784 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a" Jan 26 17:11:28 crc kubenswrapper[4856]: E0126 17:11:28.245307 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a,Command:[],Args:[--prometheus-config-reloader=$(RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER) --prometheus-instance-selector=app.kubernetes.io/managed-by=observability-operator --alertmanager-instance-selector=app.kubernetes.io/managed-by=observability-operator --thanos-ruler-instance-selector=app.kubernetes.io/managed-by=observability-operator --watch-referenced-objects-in-all-namespaces=true 
--disable-unmanaged-prometheus-configuration=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOGC,Value:30,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER,Value:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{157286400 0} {} 150Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bmwsq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-68bc856cb9-cq2gx_openshift-operators(e31d2d53-8992-45e3-98aa-24ea73236248): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 17:11:28 crc kubenswrapper[4856]: E0126 17:11:28.246600 4856 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cq2gx" podUID="e31d2d53-8992-45e3-98aa-24ea73236248" Jan 26 17:11:28 crc kubenswrapper[4856]: E0126 17:11:28.314628 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a\\\"\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cq2gx" podUID="e31d2d53-8992-45e3-98aa-24ea73236248" Jan 26 17:11:36 crc kubenswrapper[4856]: E0126 17:11:36.180306 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="registry.connect.redhat.com/elastic/eck-operator@sha256:28925fffef8f7c920b2510810cbcfc0f3dadab5f8a80b01fd5ae500e5c070105" Jan 26 17:11:36 crc kubenswrapper[4856]: E0126 17:11:36.181416 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:registry.connect.redhat.com/elastic/eck-operator@sha256:28925fffef8f7c920b2510810cbcfc0f3dadab5f8a80b01fd5ae500e5c070105,Command:[],Args:[manager --config=/conf/eck.yaml --manage-webhook-certs=false --enable-webhook --ubi-only 
--distribution-channel=certified-operators],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https-webhook,HostPort:0,ContainerPort:9443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NAMESPACES,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.annotations['olm.targetNamespaces'],},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.annotations['olm.operatorNamespace'],},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_IMAGE,Value:registry.connect.redhat.com/elastic/eck-operator@sha256:28925fffef8f7c920b2510810cbcfc0f3dadab5f8a80b01fd5ae500e5c070105,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:elasticsearch-eck-operator-certified.v3.2.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{1 0} {} 1 DecimalSI},memory: {{1073741824 0} {} 1Gi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{157286400 0} {} 150Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:apiservice-cert,ReadOnly:false,MountPath:/apiserver.local.config/certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z7wld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod elastic-operator-bf765cf6c-gbst6_service-telemetry(9988655e-7b1f-443e-a102-68665719162a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 17:11:36 crc kubenswrapper[4856]: E0126 17:11:36.182633 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/elastic-operator-bf765cf6c-gbst6" podUID="9988655e-7b1f-443e-a102-68665719162a" Jan 26 17:11:36 crc kubenswrapper[4856]: E0126 17:11:36.333656 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.connect.redhat.com/elastic/eck-operator@sha256:28925fffef8f7c920b2510810cbcfc0f3dadab5f8a80b01fd5ae500e5c070105\\\"\"" pod="service-telemetry/elastic-operator-bf765cf6c-gbst6" podUID="9988655e-7b1f-443e-a102-68665719162a" Jan 26 17:11:37 crc kubenswrapper[4856]: E0126 17:11:37.636497 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:b5c8526d2ae660fe092dd8a7acf18ec4957d5c265890a222f55396fc2cdaeed8" Jan 26 17:11:37 crc kubenswrapper[4856]: E0126 17:11:37.637182 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:perses-operator,Image:registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:b5c8526d2ae660fe092dd8a7acf18ec4957d5c265890a222f55396fc2cdaeed8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{134217728 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openshift-service-ca,ReadOnly:true,MountPath:/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tmjhr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000350000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod perses-operator-5bf474d74f-5bmfp_openshift-operators(bd7597f2-d44b-4e1b-ac60-b409985e3351): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 17:11:37 crc kubenswrapper[4856]: E0126 17:11:37.638432 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"perses-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/perses-operator-5bf474d74f-5bmfp" podUID="bd7597f2-d44b-4e1b-ac60-b409985e3351" Jan 26 17:11:37 crc kubenswrapper[4856]: E0126 17:11:37.655579 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context 
canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea" Jan 26 17:11:37 crc kubenswrapper[4856]: E0126 17:11:37.655882 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator-admission-webhook,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea,Command:[],Args:[--web.enable-tls=true --web.cert-file=/tmp/k8s-webhook-server/serving-certs/tls.crt --web.key-file=/tmp/k8s-webhook-server/serving-certs/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{209715200 0} {} BinarySI},},Requests:ResourceList{cpu: {{50 -3} {} 50m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:apiservice-cert,ReadOnly:false,MountPath:/apiserver.local.config/certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-admission-webhook-68d9fdc4dd-cf7wn_openshift-operators(7c88687f-1304-4709-b148-a196f0d0190d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 17:11:37 crc kubenswrapper[4856]: E0126 17:11:37.657656 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68d9fdc4dd-cf7wn" podUID="7c88687f-1304-4709-b148-a196f0d0190d" Jan 26 17:11:38 crc kubenswrapper[4856]: E0126 17:11:38.432349 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"perses-operator\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:b5c8526d2ae660fe092dd8a7acf18ec4957d5c265890a222f55396fc2cdaeed8\\\"\"" pod="openshift-operators/perses-operator-5bf474d74f-5bmfp" podUID="bd7597f2-d44b-4e1b-ac60-b409985e3351" Jan 26 17:11:38 crc kubenswrapper[4856]: E0126 17:11:38.432570 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea\\\"\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68d9fdc4dd-cf7wn" podUID="7c88687f-1304-4709-b148-a196f0d0190d" Jan 26 17:11:39 crc kubenswrapper[4856]: I0126 17:11:39.372554 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-wl4gw"] Jan 26 17:11:39 crc kubenswrapper[4856]: E0126 17:11:39.372937 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6521dc23-8f4e-452f-ae3e-167424fa3ed2" containerName="util" Jan 26 17:11:39 crc kubenswrapper[4856]: I0126 17:11:39.372959 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="6521dc23-8f4e-452f-ae3e-167424fa3ed2" containerName="util" Jan 26 17:11:39 crc kubenswrapper[4856]: E0126 17:11:39.372970 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6521dc23-8f4e-452f-ae3e-167424fa3ed2" containerName="extract" Jan 26 17:11:39 crc kubenswrapper[4856]: I0126 17:11:39.372977 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="6521dc23-8f4e-452f-ae3e-167424fa3ed2" containerName="extract" Jan 26 17:11:39 crc kubenswrapper[4856]: E0126 17:11:39.372988 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6521dc23-8f4e-452f-ae3e-167424fa3ed2" containerName="pull" Jan 26 17:11:39 crc 
kubenswrapper[4856]: I0126 17:11:39.372995 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="6521dc23-8f4e-452f-ae3e-167424fa3ed2" containerName="pull" Jan 26 17:11:39 crc kubenswrapper[4856]: I0126 17:11:39.373231 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="6521dc23-8f4e-452f-ae3e-167424fa3ed2" containerName="extract" Jan 26 17:11:39 crc kubenswrapper[4856]: I0126 17:11:39.375854 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-wl4gw" Jan 26 17:11:39 crc kubenswrapper[4856]: I0126 17:11:39.378370 4856 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-8gzcb" Jan 26 17:11:39 crc kubenswrapper[4856]: I0126 17:11:39.378739 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Jan 26 17:11:39 crc kubenswrapper[4856]: I0126 17:11:39.379645 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Jan 26 17:11:39 crc kubenswrapper[4856]: I0126 17:11:39.389674 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-wl4gw"] Jan 26 17:11:39 crc kubenswrapper[4856]: I0126 17:11:39.459563 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a976d6d5-989b-49e8-bb9e-00c54dba078a-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-wl4gw\" (UID: \"a976d6d5-989b-49e8-bb9e-00c54dba078a\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-wl4gw" Jan 26 17:11:39 crc kubenswrapper[4856]: I0126 17:11:39.459773 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-6sjx8\" (UniqueName: \"kubernetes.io/projected/a976d6d5-989b-49e8-bb9e-00c54dba078a-kube-api-access-6sjx8\") pod \"cert-manager-operator-controller-manager-5446d6888b-wl4gw\" (UID: \"a976d6d5-989b-49e8-bb9e-00c54dba078a\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-wl4gw" Jan 26 17:11:39 crc kubenswrapper[4856]: I0126 17:11:39.561360 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sjx8\" (UniqueName: \"kubernetes.io/projected/a976d6d5-989b-49e8-bb9e-00c54dba078a-kube-api-access-6sjx8\") pod \"cert-manager-operator-controller-manager-5446d6888b-wl4gw\" (UID: \"a976d6d5-989b-49e8-bb9e-00c54dba078a\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-wl4gw" Jan 26 17:11:39 crc kubenswrapper[4856]: I0126 17:11:39.561439 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a976d6d5-989b-49e8-bb9e-00c54dba078a-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-wl4gw\" (UID: \"a976d6d5-989b-49e8-bb9e-00c54dba078a\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-wl4gw" Jan 26 17:11:39 crc kubenswrapper[4856]: I0126 17:11:39.561922 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a976d6d5-989b-49e8-bb9e-00c54dba078a-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-wl4gw\" (UID: \"a976d6d5-989b-49e8-bb9e-00c54dba078a\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-wl4gw" Jan 26 17:11:39 crc kubenswrapper[4856]: I0126 17:11:39.585423 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sjx8\" (UniqueName: \"kubernetes.io/projected/a976d6d5-989b-49e8-bb9e-00c54dba078a-kube-api-access-6sjx8\") pod \"cert-manager-operator-controller-manager-5446d6888b-wl4gw\" 
(UID: \"a976d6d5-989b-49e8-bb9e-00c54dba078a\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-wl4gw" Jan 26 17:11:39 crc kubenswrapper[4856]: I0126 17:11:39.697309 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-wl4gw" Jan 26 17:11:43 crc kubenswrapper[4856]: E0126 17:11:43.249775 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/amq7/amq-interconnect-operator@sha256:a8b621237c872ded2a1d1d948fbebd693429e4a1ced1d7922406241a078d3d43" Jan 26 17:11:43 crc kubenswrapper[4856]: E0126 17:11:43.250020 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:interconnect-operator,Image:registry.redhat.io/amq7/amq-interconnect-operator@sha256:a8b621237c872ded2a1d1d948fbebd693429e4a1ced1d7922406241a078d3d43,Command:[qdr-operator],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:60000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAME,Value:qdr-operator,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_QDROUTERD_IMAGE,Value:registry.redhat.io/amq7/amq-interconnect@sha256:31d87473fa684178a694f9ee331d3c80f2653f9533cb65c2a325752166a077e9,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:amq7-interconnect-operator.v1.10.20,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},Vol
umeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qhz7k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod interconnect-operator-5bb49f789d-k8bc4_service-telemetry(15358471-0b96-4792-aaaa-433823f9ed88): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 17:11:43 crc kubenswrapper[4856]: E0126 17:11:43.251226 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"interconnect-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="service-telemetry/interconnect-operator-5bb49f789d-k8bc4" podUID="15358471-0b96-4792-aaaa-433823f9ed88" Jan 26 17:11:43 crc kubenswrapper[4856]: E0126 17:11:43.499227 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"interconnect-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/amq7/amq-interconnect-operator@sha256:a8b621237c872ded2a1d1d948fbebd693429e4a1ced1d7922406241a078d3d43\\\"\"" pod="service-telemetry/interconnect-operator-5bb49f789d-k8bc4" podUID="15358471-0b96-4792-aaaa-433823f9ed88" Jan 26 17:11:43 
crc kubenswrapper[4856]: I0126 17:11:43.544005 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-wl4gw"] Jan 26 17:11:43 crc kubenswrapper[4856]: W0126 17:11:43.550033 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda976d6d5_989b_49e8_bb9e_00c54dba078a.slice/crio-c0badef7f49f1421504f385a874d4ddb46c0f67be5b4353c924de6ec8b397518 WatchSource:0}: Error finding container c0badef7f49f1421504f385a874d4ddb46c0f67be5b4353c924de6ec8b397518: Status 404 returned error can't find the container with id c0badef7f49f1421504f385a874d4ddb46c0f67be5b4353c924de6ec8b397518 Jan 26 17:11:44 crc kubenswrapper[4856]: I0126 17:11:44.504237 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68d9fdc4dd-jbq25" event={"ID":"766f50ba-0751-4f25-a6db-3b7195e72f55","Type":"ContainerStarted","Data":"d4af2ae0fc9165956735abdaecbb6623aeb839ccd306f173315a8ca783224c65"} Jan 26 17:11:44 crc kubenswrapper[4856]: I0126 17:11:44.507441 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cq2gx" event={"ID":"e31d2d53-8992-45e3-98aa-24ea73236248","Type":"ContainerStarted","Data":"8236a64fbd4a9d4a0bc8353f3388c89b02598fe759ed8fd5b7bf123bcc6a0723"} Jan 26 17:11:44 crc kubenswrapper[4856]: I0126 17:11:44.511003 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-fpn2h" event={"ID":"a4ae7646-2afb-4ada-b8a4-d20a69f87949","Type":"ContainerStarted","Data":"5760274324fbd1a6f9babf8c0cd31646fd5ec80c55fb710d1c0b47542e1ef0f1"} Jan 26 17:11:44 crc kubenswrapper[4856]: I0126 17:11:44.511236 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-fpn2h" Jan 26 17:11:44 crc kubenswrapper[4856]: I0126 
17:11:44.513466 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-wl4gw" event={"ID":"a976d6d5-989b-49e8-bb9e-00c54dba078a","Type":"ContainerStarted","Data":"c0badef7f49f1421504f385a874d4ddb46c0f67be5b4353c924de6ec8b397518"} Jan 26 17:11:44 crc kubenswrapper[4856]: I0126 17:11:44.513668 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-fpn2h" Jan 26 17:11:44 crc kubenswrapper[4856]: I0126 17:11:44.537496 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68d9fdc4dd-jbq25" podStartSLOduration=3.247349798 podStartE2EDuration="39.537461142s" podCreationTimestamp="2026-01-26 17:11:05 +0000 UTC" firstStartedPulling="2026-01-26 17:11:06.981105038 +0000 UTC m=+762.934359019" lastFinishedPulling="2026-01-26 17:11:43.271216382 +0000 UTC m=+799.224470363" observedRunningTime="2026-01-26 17:11:44.536454323 +0000 UTC m=+800.489708314" watchObservedRunningTime="2026-01-26 17:11:44.537461142 +0000 UTC m=+800.490715133" Jan 26 17:11:44 crc kubenswrapper[4856]: I0126 17:11:44.609974 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-fpn2h" podStartSLOduration=2.280301729 podStartE2EDuration="38.609955452s" podCreationTimestamp="2026-01-26 17:11:06 +0000 UTC" firstStartedPulling="2026-01-26 17:11:06.941563689 +0000 UTC m=+762.894817680" lastFinishedPulling="2026-01-26 17:11:43.271217422 +0000 UTC m=+799.224471403" observedRunningTime="2026-01-26 17:11:44.583604279 +0000 UTC m=+800.536858280" watchObservedRunningTime="2026-01-26 17:11:44.609955452 +0000 UTC m=+800.563209433" Jan 26 17:11:44 crc kubenswrapper[4856]: I0126 17:11:44.610261 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cq2gx" podStartSLOduration=3.035226025 podStartE2EDuration="39.61025685s" podCreationTimestamp="2026-01-26 17:11:05 +0000 UTC" firstStartedPulling="2026-01-26 17:11:06.73584922 +0000 UTC m=+762.689103201" lastFinishedPulling="2026-01-26 17:11:43.310880045 +0000 UTC m=+799.264134026" observedRunningTime="2026-01-26 17:11:44.606007509 +0000 UTC m=+800.559261500" watchObservedRunningTime="2026-01-26 17:11:44.61025685 +0000 UTC m=+800.563510821" Jan 26 17:11:53 crc kubenswrapper[4856]: I0126 17:11:53.575093 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-wl4gw" event={"ID":"a976d6d5-989b-49e8-bb9e-00c54dba078a","Type":"ContainerStarted","Data":"21f00369be34e114c32e8e60d719fdedf1ab109abb693b3da0814d8133a1f758"} Jan 26 17:11:53 crc kubenswrapper[4856]: I0126 17:11:53.576677 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-bf765cf6c-gbst6" event={"ID":"9988655e-7b1f-443e-a102-68665719162a","Type":"ContainerStarted","Data":"74ec440aeea33ccacd27ac561070abd83a42057c3516d917c9525d4bdabe6435"} Jan 26 17:11:53 crc kubenswrapper[4856]: I0126 17:11:53.578644 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-5bmfp" event={"ID":"bd7597f2-d44b-4e1b-ac60-b409985e3351","Type":"ContainerStarted","Data":"66e10285f6d296dd2a1b78f58e5ac030da89973d1aac49cea6bf74e3ede6338e"} Jan 26 17:11:53 crc kubenswrapper[4856]: I0126 17:11:53.578940 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-5bmfp" Jan 26 17:11:53 crc kubenswrapper[4856]: I0126 17:11:53.579596 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68d9fdc4dd-cf7wn" 
event={"ID":"7c88687f-1304-4709-b148-a196f0d0190d","Type":"ContainerStarted","Data":"e28c16cd32a7cfcd39588bf2118b2218eb24d94557f0688fdf1303b55eb3bcfd"} Jan 26 17:11:53 crc kubenswrapper[4856]: I0126 17:11:53.668141 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-wl4gw" podStartSLOduration=5.043201361 podStartE2EDuration="14.668108066s" podCreationTimestamp="2026-01-26 17:11:39 +0000 UTC" firstStartedPulling="2026-01-26 17:11:43.553938573 +0000 UTC m=+799.507192554" lastFinishedPulling="2026-01-26 17:11:53.178845288 +0000 UTC m=+809.132099259" observedRunningTime="2026-01-26 17:11:53.661981051 +0000 UTC m=+809.615235032" watchObservedRunningTime="2026-01-26 17:11:53.668108066 +0000 UTC m=+809.621362047" Jan 26 17:11:53 crc kubenswrapper[4856]: I0126 17:11:53.688239 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-bf765cf6c-gbst6" podStartSLOduration=3.830375178 podStartE2EDuration="45.68822016s" podCreationTimestamp="2026-01-26 17:11:08 +0000 UTC" firstStartedPulling="2026-01-26 17:11:11.570094657 +0000 UTC m=+767.523348638" lastFinishedPulling="2026-01-26 17:11:53.427939639 +0000 UTC m=+809.381193620" observedRunningTime="2026-01-26 17:11:53.682356513 +0000 UTC m=+809.635610514" watchObservedRunningTime="2026-01-26 17:11:53.68822016 +0000 UTC m=+809.641474141" Jan 26 17:11:53 crc kubenswrapper[4856]: I0126 17:11:53.706868 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-5bmfp" podStartSLOduration=1.3310320390000001 podStartE2EDuration="47.706849182s" podCreationTimestamp="2026-01-26 17:11:06 +0000 UTC" firstStartedPulling="2026-01-26 17:11:06.981620573 +0000 UTC m=+762.934874554" lastFinishedPulling="2026-01-26 17:11:53.357437726 +0000 UTC m=+809.310691697" observedRunningTime="2026-01-26 17:11:53.706012058 +0000 UTC 
m=+809.659266059" watchObservedRunningTime="2026-01-26 17:11:53.706849182 +0000 UTC m=+809.660103163" Jan 26 17:11:53 crc kubenswrapper[4856]: I0126 17:11:53.729923 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68d9fdc4dd-cf7wn" podStartSLOduration=-9223371988.12487 podStartE2EDuration="48.7299051s" podCreationTimestamp="2026-01-26 17:11:05 +0000 UTC" firstStartedPulling="2026-01-26 17:11:06.865061054 +0000 UTC m=+762.818315035" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:11:53.726687488 +0000 UTC m=+809.679941479" watchObservedRunningTime="2026-01-26 17:11:53.7299051 +0000 UTC m=+809.683159081" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.692726 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.694589 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.707760 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-unicast-hosts" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.707773 4856 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-remote-ca" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.708279 4856 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-transport-certs" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.708567 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-scripts" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.709842 4856 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-config" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.710215 4856 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-xpack-file-realm" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.710447 4856 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-internal-users" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.710767 4856 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-dockercfg-5t26m" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.710864 4856 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-http-certs-internal" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.747722 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.784166 4856 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/8cba7b0b-8fbc-4d94-a808-43c46e0defaa-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"8cba7b0b-8fbc-4d94-a808-43c46e0defaa\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.784222 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/8cba7b0b-8fbc-4d94-a808-43c46e0defaa-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"8cba7b0b-8fbc-4d94-a808-43c46e0defaa\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.784249 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/8cba7b0b-8fbc-4d94-a808-43c46e0defaa-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"8cba7b0b-8fbc-4d94-a808-43c46e0defaa\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.784290 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/8cba7b0b-8fbc-4d94-a808-43c46e0defaa-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"8cba7b0b-8fbc-4d94-a808-43c46e0defaa\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.784318 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: 
\"kubernetes.io/empty-dir/8cba7b0b-8fbc-4d94-a808-43c46e0defaa-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"8cba7b0b-8fbc-4d94-a808-43c46e0defaa\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.784335 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8cba7b0b-8fbc-4d94-a808-43c46e0defaa-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"8cba7b0b-8fbc-4d94-a808-43c46e0defaa\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.784356 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/8cba7b0b-8fbc-4d94-a808-43c46e0defaa-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"8cba7b0b-8fbc-4d94-a808-43c46e0defaa\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.784387 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/8cba7b0b-8fbc-4d94-a808-43c46e0defaa-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"8cba7b0b-8fbc-4d94-a808-43c46e0defaa\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.784406 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/8cba7b0b-8fbc-4d94-a808-43c46e0defaa-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"8cba7b0b-8fbc-4d94-a808-43c46e0defaa\") " 
pod="service-telemetry/elasticsearch-es-default-0" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.784421 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/8cba7b0b-8fbc-4d94-a808-43c46e0defaa-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"8cba7b0b-8fbc-4d94-a808-43c46e0defaa\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.784442 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/8cba7b0b-8fbc-4d94-a808-43c46e0defaa-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"8cba7b0b-8fbc-4d94-a808-43c46e0defaa\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.784466 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/8cba7b0b-8fbc-4d94-a808-43c46e0defaa-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"8cba7b0b-8fbc-4d94-a808-43c46e0defaa\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.784486 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/8cba7b0b-8fbc-4d94-a808-43c46e0defaa-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"8cba7b0b-8fbc-4d94-a808-43c46e0defaa\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.784547 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" 
(UniqueName: \"kubernetes.io/secret/8cba7b0b-8fbc-4d94-a808-43c46e0defaa-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"8cba7b0b-8fbc-4d94-a808-43c46e0defaa\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.784583 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/8cba7b0b-8fbc-4d94-a808-43c46e0defaa-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"8cba7b0b-8fbc-4d94-a808-43c46e0defaa\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.889249 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/8cba7b0b-8fbc-4d94-a808-43c46e0defaa-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"8cba7b0b-8fbc-4d94-a808-43c46e0defaa\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.889310 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/8cba7b0b-8fbc-4d94-a808-43c46e0defaa-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"8cba7b0b-8fbc-4d94-a808-43c46e0defaa\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.889344 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/8cba7b0b-8fbc-4d94-a808-43c46e0defaa-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"8cba7b0b-8fbc-4d94-a808-43c46e0defaa\") " 
pod="service-telemetry/elasticsearch-es-default-0" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.889454 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/8cba7b0b-8fbc-4d94-a808-43c46e0defaa-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"8cba7b0b-8fbc-4d94-a808-43c46e0defaa\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.889517 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/8cba7b0b-8fbc-4d94-a808-43c46e0defaa-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"8cba7b0b-8fbc-4d94-a808-43c46e0defaa\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.889585 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/8cba7b0b-8fbc-4d94-a808-43c46e0defaa-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"8cba7b0b-8fbc-4d94-a808-43c46e0defaa\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.889611 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/8cba7b0b-8fbc-4d94-a808-43c46e0defaa-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"8cba7b0b-8fbc-4d94-a808-43c46e0defaa\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.889661 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: 
\"kubernetes.io/secret/8cba7b0b-8fbc-4d94-a808-43c46e0defaa-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"8cba7b0b-8fbc-4d94-a808-43c46e0defaa\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.889694 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/8cba7b0b-8fbc-4d94-a808-43c46e0defaa-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"8cba7b0b-8fbc-4d94-a808-43c46e0defaa\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.889735 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8cba7b0b-8fbc-4d94-a808-43c46e0defaa-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"8cba7b0b-8fbc-4d94-a808-43c46e0defaa\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.889782 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/8cba7b0b-8fbc-4d94-a808-43c46e0defaa-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"8cba7b0b-8fbc-4d94-a808-43c46e0defaa\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.889830 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/8cba7b0b-8fbc-4d94-a808-43c46e0defaa-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"8cba7b0b-8fbc-4d94-a808-43c46e0defaa\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.889866 4856 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/8cba7b0b-8fbc-4d94-a808-43c46e0defaa-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"8cba7b0b-8fbc-4d94-a808-43c46e0defaa\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.889926 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/8cba7b0b-8fbc-4d94-a808-43c46e0defaa-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"8cba7b0b-8fbc-4d94-a808-43c46e0defaa\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.889973 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/8cba7b0b-8fbc-4d94-a808-43c46e0defaa-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"8cba7b0b-8fbc-4d94-a808-43c46e0defaa\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.895603 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/8cba7b0b-8fbc-4d94-a808-43c46e0defaa-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"8cba7b0b-8fbc-4d94-a808-43c46e0defaa\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.896631 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/8cba7b0b-8fbc-4d94-a808-43c46e0defaa-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"8cba7b0b-8fbc-4d94-a808-43c46e0defaa\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 17:11:55 crc 
kubenswrapper[4856]: I0126 17:11:55.901181 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/8cba7b0b-8fbc-4d94-a808-43c46e0defaa-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"8cba7b0b-8fbc-4d94-a808-43c46e0defaa\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.901230 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/8cba7b0b-8fbc-4d94-a808-43c46e0defaa-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"8cba7b0b-8fbc-4d94-a808-43c46e0defaa\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.901600 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8cba7b0b-8fbc-4d94-a808-43c46e0defaa-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"8cba7b0b-8fbc-4d94-a808-43c46e0defaa\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.901935 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/8cba7b0b-8fbc-4d94-a808-43c46e0defaa-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"8cba7b0b-8fbc-4d94-a808-43c46e0defaa\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.903717 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/8cba7b0b-8fbc-4d94-a808-43c46e0defaa-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: 
\"8cba7b0b-8fbc-4d94-a808-43c46e0defaa\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.905326 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/8cba7b0b-8fbc-4d94-a808-43c46e0defaa-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"8cba7b0b-8fbc-4d94-a808-43c46e0defaa\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.905776 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/8cba7b0b-8fbc-4d94-a808-43c46e0defaa-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"8cba7b0b-8fbc-4d94-a808-43c46e0defaa\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.906130 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/8cba7b0b-8fbc-4d94-a808-43c46e0defaa-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"8cba7b0b-8fbc-4d94-a808-43c46e0defaa\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.906764 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/8cba7b0b-8fbc-4d94-a808-43c46e0defaa-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"8cba7b0b-8fbc-4d94-a808-43c46e0defaa\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.909190 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: 
\"kubernetes.io/secret/8cba7b0b-8fbc-4d94-a808-43c46e0defaa-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"8cba7b0b-8fbc-4d94-a808-43c46e0defaa\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.909485 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/8cba7b0b-8fbc-4d94-a808-43c46e0defaa-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"8cba7b0b-8fbc-4d94-a808-43c46e0defaa\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.931127 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/8cba7b0b-8fbc-4d94-a808-43c46e0defaa-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"8cba7b0b-8fbc-4d94-a808-43c46e0defaa\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 17:11:55 crc kubenswrapper[4856]: I0126 17:11:55.936345 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/8cba7b0b-8fbc-4d94-a808-43c46e0defaa-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"8cba7b0b-8fbc-4d94-a808-43c46e0defaa\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 17:11:56 crc kubenswrapper[4856]: I0126 17:11:56.143946 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Jan 26 17:11:56 crc kubenswrapper[4856]: I0126 17:11:56.563747 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 26 17:11:56 crc kubenswrapper[4856]: W0126 17:11:56.567231 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8cba7b0b_8fbc_4d94_a808_43c46e0defaa.slice/crio-db3e40a1186e23ef0cae8e0d0a1825b9e17257d4c36b7abfe5dd7998d36153e6 WatchSource:0}: Error finding container db3e40a1186e23ef0cae8e0d0a1825b9e17257d4c36b7abfe5dd7998d36153e6: Status 404 returned error can't find the container with id db3e40a1186e23ef0cae8e0d0a1825b9e17257d4c36b7abfe5dd7998d36153e6 Jan 26 17:11:56 crc kubenswrapper[4856]: I0126 17:11:56.738597 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"8cba7b0b-8fbc-4d94-a808-43c46e0defaa","Type":"ContainerStarted","Data":"db3e40a1186e23ef0cae8e0d0a1825b9e17257d4c36b7abfe5dd7998d36153e6"} Jan 26 17:11:56 crc kubenswrapper[4856]: I0126 17:11:56.952681 4856 patch_prober.go:28] interesting pod/machine-config-daemon-xm9cq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:11:56 crc kubenswrapper[4856]: I0126 17:11:56.952756 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:11:57 crc kubenswrapper[4856]: I0126 17:11:57.164025 4856 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["cert-manager/cert-manager-webhook-f4fb5df64-www8b"] Jan 26 17:11:57 crc kubenswrapper[4856]: I0126 17:11:57.165073 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-www8b" Jan 26 17:11:57 crc kubenswrapper[4856]: I0126 17:11:57.168613 4856 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-r9d9k" Jan 26 17:11:57 crc kubenswrapper[4856]: I0126 17:11:57.169056 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 26 17:11:57 crc kubenswrapper[4856]: I0126 17:11:57.169339 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 26 17:11:57 crc kubenswrapper[4856]: I0126 17:11:57.176506 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-www8b"] Jan 26 17:11:57 crc kubenswrapper[4856]: I0126 17:11:57.289378 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e9288910-baf7-4cc4-b313-c87b80bfdd3e-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-www8b\" (UID: \"e9288910-baf7-4cc4-b313-c87b80bfdd3e\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-www8b" Jan 26 17:11:57 crc kubenswrapper[4856]: I0126 17:11:57.289539 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw4xs\" (UniqueName: \"kubernetes.io/projected/e9288910-baf7-4cc4-b313-c87b80bfdd3e-kube-api-access-pw4xs\") pod \"cert-manager-webhook-f4fb5df64-www8b\" (UID: \"e9288910-baf7-4cc4-b313-c87b80bfdd3e\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-www8b" Jan 26 17:11:57 crc kubenswrapper[4856]: I0126 17:11:57.390974 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/e9288910-baf7-4cc4-b313-c87b80bfdd3e-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-www8b\" (UID: \"e9288910-baf7-4cc4-b313-c87b80bfdd3e\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-www8b" Jan 26 17:11:57 crc kubenswrapper[4856]: I0126 17:11:57.391059 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pw4xs\" (UniqueName: \"kubernetes.io/projected/e9288910-baf7-4cc4-b313-c87b80bfdd3e-kube-api-access-pw4xs\") pod \"cert-manager-webhook-f4fb5df64-www8b\" (UID: \"e9288910-baf7-4cc4-b313-c87b80bfdd3e\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-www8b" Jan 26 17:11:57 crc kubenswrapper[4856]: I0126 17:11:57.414649 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e9288910-baf7-4cc4-b313-c87b80bfdd3e-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-www8b\" (UID: \"e9288910-baf7-4cc4-b313-c87b80bfdd3e\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-www8b" Jan 26 17:11:57 crc kubenswrapper[4856]: I0126 17:11:57.424416 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw4xs\" (UniqueName: \"kubernetes.io/projected/e9288910-baf7-4cc4-b313-c87b80bfdd3e-kube-api-access-pw4xs\") pod \"cert-manager-webhook-f4fb5df64-www8b\" (UID: \"e9288910-baf7-4cc4-b313-c87b80bfdd3e\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-www8b" Jan 26 17:11:57 crc kubenswrapper[4856]: I0126 17:11:57.499890 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-www8b" Jan 26 17:11:57 crc kubenswrapper[4856]: I0126 17:11:57.949963 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-www8b"] Jan 26 17:11:57 crc kubenswrapper[4856]: W0126 17:11:57.962563 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9288910_baf7_4cc4_b313_c87b80bfdd3e.slice/crio-ddd5416f5cb3a66e434992028c974aa69e0cdcf6a9d28626bf0068580e89a28b WatchSource:0}: Error finding container ddd5416f5cb3a66e434992028c974aa69e0cdcf6a9d28626bf0068580e89a28b: Status 404 returned error can't find the container with id ddd5416f5cb3a66e434992028c974aa69e0cdcf6a9d28626bf0068580e89a28b Jan 26 17:11:58 crc kubenswrapper[4856]: I0126 17:11:58.620597 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-rm9wd"] Jan 26 17:11:58 crc kubenswrapper[4856]: I0126 17:11:58.626694 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-rm9wd" Jan 26 17:11:58 crc kubenswrapper[4856]: I0126 17:11:58.629182 4856 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-4m8h6" Jan 26 17:11:58 crc kubenswrapper[4856]: I0126 17:11:58.630177 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-rm9wd"] Jan 26 17:11:58 crc kubenswrapper[4856]: I0126 17:11:58.809127 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-www8b" event={"ID":"e9288910-baf7-4cc4-b313-c87b80bfdd3e","Type":"ContainerStarted","Data":"ddd5416f5cb3a66e434992028c974aa69e0cdcf6a9d28626bf0068580e89a28b"} Jan 26 17:11:58 crc kubenswrapper[4856]: I0126 17:11:58.813507 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kflgh\" (UniqueName: \"kubernetes.io/projected/24a9d780-2b57-49d2-9cb9-eac2456ed86d-kube-api-access-kflgh\") pod \"cert-manager-cainjector-855d9ccff4-rm9wd\" (UID: \"24a9d780-2b57-49d2-9cb9-eac2456ed86d\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-rm9wd" Jan 26 17:11:58 crc kubenswrapper[4856]: I0126 17:11:58.813650 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/24a9d780-2b57-49d2-9cb9-eac2456ed86d-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-rm9wd\" (UID: \"24a9d780-2b57-49d2-9cb9-eac2456ed86d\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-rm9wd" Jan 26 17:11:58 crc kubenswrapper[4856]: I0126 17:11:58.915704 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/24a9d780-2b57-49d2-9cb9-eac2456ed86d-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-rm9wd\" (UID: 
\"24a9d780-2b57-49d2-9cb9-eac2456ed86d\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-rm9wd" Jan 26 17:11:58 crc kubenswrapper[4856]: I0126 17:11:58.915798 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kflgh\" (UniqueName: \"kubernetes.io/projected/24a9d780-2b57-49d2-9cb9-eac2456ed86d-kube-api-access-kflgh\") pod \"cert-manager-cainjector-855d9ccff4-rm9wd\" (UID: \"24a9d780-2b57-49d2-9cb9-eac2456ed86d\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-rm9wd" Jan 26 17:11:58 crc kubenswrapper[4856]: I0126 17:11:58.946302 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kflgh\" (UniqueName: \"kubernetes.io/projected/24a9d780-2b57-49d2-9cb9-eac2456ed86d-kube-api-access-kflgh\") pod \"cert-manager-cainjector-855d9ccff4-rm9wd\" (UID: \"24a9d780-2b57-49d2-9cb9-eac2456ed86d\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-rm9wd" Jan 26 17:11:58 crc kubenswrapper[4856]: I0126 17:11:58.987869 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/24a9d780-2b57-49d2-9cb9-eac2456ed86d-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-rm9wd\" (UID: \"24a9d780-2b57-49d2-9cb9-eac2456ed86d\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-rm9wd" Jan 26 17:11:59 crc kubenswrapper[4856]: I0126 17:11:59.244154 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-rm9wd" Jan 26 17:11:59 crc kubenswrapper[4856]: I0126 17:11:59.838165 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-5bb49f789d-k8bc4" event={"ID":"15358471-0b96-4792-aaaa-433823f9ed88","Type":"ContainerStarted","Data":"97dcbb99650087b040f160ca42a2bd8e63017c9cb21286cf505b461eae5754e5"} Jan 26 17:11:59 crc kubenswrapper[4856]: I0126 17:11:59.872921 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-5bb49f789d-k8bc4" podStartSLOduration=1.734664881 podStartE2EDuration="47.872899048s" podCreationTimestamp="2026-01-26 17:11:12 +0000 UTC" firstStartedPulling="2026-01-26 17:11:13.092200689 +0000 UTC m=+769.045454670" lastFinishedPulling="2026-01-26 17:11:59.230434856 +0000 UTC m=+815.183688837" observedRunningTime="2026-01-26 17:11:59.87017467 +0000 UTC m=+815.823428661" watchObservedRunningTime="2026-01-26 17:11:59.872899048 +0000 UTC m=+815.826153039" Jan 26 17:12:00 crc kubenswrapper[4856]: I0126 17:12:00.033493 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-rm9wd"] Jan 26 17:12:00 crc kubenswrapper[4856]: I0126 17:12:00.846772 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-rm9wd" event={"ID":"24a9d780-2b57-49d2-9cb9-eac2456ed86d","Type":"ContainerStarted","Data":"e6ff1bea74c7a653ac5b07ac4eaf01f57fa00dc9d26acd07d07c7c3048e0d12f"} Jan 26 17:12:06 crc kubenswrapper[4856]: I0126 17:12:06.695273 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-5bmfp" Jan 26 17:12:16 crc kubenswrapper[4856]: I0126 17:12:16.409793 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-xcqr4"] Jan 26 17:12:16 crc kubenswrapper[4856]: I0126 17:12:16.411833 4856 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-xcqr4" Jan 26 17:12:16 crc kubenswrapper[4856]: I0126 17:12:16.415603 4856 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-gxmx7" Jan 26 17:12:16 crc kubenswrapper[4856]: I0126 17:12:16.493615 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-xcqr4"] Jan 26 17:12:16 crc kubenswrapper[4856]: I0126 17:12:16.513505 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92jf9\" (UniqueName: \"kubernetes.io/projected/3dc10d6b-aa48-4c7d-afab-45fa62298819-kube-api-access-92jf9\") pod \"cert-manager-86cb77c54b-xcqr4\" (UID: \"3dc10d6b-aa48-4c7d-afab-45fa62298819\") " pod="cert-manager/cert-manager-86cb77c54b-xcqr4" Jan 26 17:12:16 crc kubenswrapper[4856]: I0126 17:12:16.513597 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3dc10d6b-aa48-4c7d-afab-45fa62298819-bound-sa-token\") pod \"cert-manager-86cb77c54b-xcqr4\" (UID: \"3dc10d6b-aa48-4c7d-afab-45fa62298819\") " pod="cert-manager/cert-manager-86cb77c54b-xcqr4" Jan 26 17:12:16 crc kubenswrapper[4856]: I0126 17:12:16.615642 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92jf9\" (UniqueName: \"kubernetes.io/projected/3dc10d6b-aa48-4c7d-afab-45fa62298819-kube-api-access-92jf9\") pod \"cert-manager-86cb77c54b-xcqr4\" (UID: \"3dc10d6b-aa48-4c7d-afab-45fa62298819\") " pod="cert-manager/cert-manager-86cb77c54b-xcqr4" Jan 26 17:12:16 crc kubenswrapper[4856]: I0126 17:12:16.616015 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3dc10d6b-aa48-4c7d-afab-45fa62298819-bound-sa-token\") pod 
\"cert-manager-86cb77c54b-xcqr4\" (UID: \"3dc10d6b-aa48-4c7d-afab-45fa62298819\") " pod="cert-manager/cert-manager-86cb77c54b-xcqr4" Jan 26 17:12:16 crc kubenswrapper[4856]: I0126 17:12:16.635516 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3dc10d6b-aa48-4c7d-afab-45fa62298819-bound-sa-token\") pod \"cert-manager-86cb77c54b-xcqr4\" (UID: \"3dc10d6b-aa48-4c7d-afab-45fa62298819\") " pod="cert-manager/cert-manager-86cb77c54b-xcqr4" Jan 26 17:12:16 crc kubenswrapper[4856]: I0126 17:12:16.635585 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92jf9\" (UniqueName: \"kubernetes.io/projected/3dc10d6b-aa48-4c7d-afab-45fa62298819-kube-api-access-92jf9\") pod \"cert-manager-86cb77c54b-xcqr4\" (UID: \"3dc10d6b-aa48-4c7d-afab-45fa62298819\") " pod="cert-manager/cert-manager-86cb77c54b-xcqr4" Jan 26 17:12:16 crc kubenswrapper[4856]: I0126 17:12:16.744853 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-xcqr4" Jan 26 17:12:17 crc kubenswrapper[4856]: E0126 17:12:17.436279 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df" Jan 26 17:12:17 crc kubenswrapper[4856]: E0126 17:12:17.436606 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cert-manager-webhook,Image:registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df,Command:[/app/cmd/webhook/webhook],Args:[--dynamic-serving-ca-secret-name=cert-manager-webhook-ca --dynamic-serving-ca-secret-namespace=$(POD_NAMESPACE) --dynamic-serving-dns-names=cert-manager-webhook,cert-manager-webhook.$(POD_NAMESPACE),cert-manager-webhook.$(POD_NAMESPACE).svc --secure-port=10250 
--v=2],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:10250,Protocol:TCP,HostIP:,},ContainerPort{Name:healthcheck,HostPort:0,ContainerPort:6080,Protocol:TCP,HostIP:,},ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:9402,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bound-sa-token,ReadOnly:true,MountPath:/var/run/secrets/openshift/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pw4xs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 healthcheck},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{1 0 
healthcheck},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000690000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cert-manager-webhook-f4fb5df64-www8b_cert-manager(e9288910-baf7-4cc4-b313-c87b80bfdd3e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 17:12:17 crc kubenswrapper[4856]: E0126 17:12:17.437794 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-webhook\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="cert-manager/cert-manager-webhook-f4fb5df64-www8b" podUID="e9288910-baf7-4cc4-b313-c87b80bfdd3e" Jan 26 17:12:18 crc kubenswrapper[4856]: E0126 17:12:18.349107 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-webhook\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df\\\"\"" pod="cert-manager/cert-manager-webhook-f4fb5df64-www8b" podUID="e9288910-baf7-4cc4-b313-c87b80bfdd3e" Jan 26 17:12:26 crc kubenswrapper[4856]: E0126 
17:12:26.720227 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="registry.connect.redhat.com/elastic/elasticsearch:7.17.20" Jan 26 17:12:26 crc kubenswrapper[4856]: E0126 17:12:26.721032 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:elastic-internal-init-filesystem,Image:registry.connect.redhat.com/elastic/elasticsearch:7.17.20,Command:[bash -c /mnt/elastic-internal/scripts/prepare-fs.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:HEADLESS_SERVICE_NAME,Value:elasticsearch-es-default,ValueFrom:nil,},EnvVar{Name:PROBE_PASSWORD_PATH,Value:/mnt/elastic-internal/pod-mounted-users/elastic-internal-probe,ValueFrom:nil,},EnvVar{Name:PROBE_USERNAME,Value:elastic-internal-probe,ValueFrom:nil,},EnvVar{Name:READINESS_PROBE_PROTOCOL,Value:https,ValueFrom:nil,},EnvVar{Name:NSS_SDB_USE_CACHE,Value:no,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:downward-api,ReadOnly:true,MountPath:/mnt/elastic-internal/downward-api,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-bin-local,ReadOnly:false,MountPath:/mnt/elastic-internal/elasticsearch-bin-local,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-config,ReadOnly:true,MountPath:/mnt/elastic-internal/elasticsearch-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-config-local,ReadOnly:false,MountPath:/mnt/elastic-internal/elasticsearch-config-local,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-plugins-local,ReadOnly:false,MountPath:/mnt/elastic-internal/elasticsearch-plugins-local,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-http-certificates,ReadOnly:true,MountPath:/usr/share/elasticsearch/config/http-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-probe-user,ReadOnly:true,MountPath:/mnt/elastic-internal/pod-mounted-users,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-remote-certificate-authorities,ReadOnly:true,MountPath:/usr/share/elasticsearch/config/transport-remote-certs/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-scripts,ReadOnly:true,MountPath:/mnt/elastic-internal/scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-transport-certificates,ReadOnly:true,MountPath:/mnt/elastic-internal/transport-certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-unicast-hosts,ReadOnly:
true,MountPath:/mnt/elastic-internal/unicast-hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-xpack-file-realm,ReadOnly:true,MountPath:/mnt/elastic-internal/xpack-file-realm,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elasticsearch-data,ReadOnly:false,MountPath:/usr/share/elasticsearch/data,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elasticsearch-logs,ReadOnly:false,MountPath:/usr/share/elasticsearch/logs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-volume,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod elasticsearch-es-default-0_service-telemetry(8cba7b0b-8fbc-4d94-a808-43c46e0defaa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 17:12:26 crc kubenswrapper[4856]: E0126 17:12:26.722765 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="8cba7b0b-8fbc-4d94-a808-43c46e0defaa" Jan 26 17:12:26 crc 
kubenswrapper[4856]: I0126 17:12:26.767940 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-xcqr4"] Jan 26 17:12:27 crc kubenswrapper[4856]: I0126 17:12:26.939049 4856 patch_prober.go:28] interesting pod/machine-config-daemon-xm9cq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:12:27 crc kubenswrapper[4856]: I0126 17:12:26.939143 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:12:27 crc kubenswrapper[4856]: I0126 17:12:27.591846 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-xcqr4" event={"ID":"3dc10d6b-aa48-4c7d-afab-45fa62298819","Type":"ContainerStarted","Data":"01718ae2682246101d0243d503913e773a0a7428a8847cd99c245f99820c6b31"} Jan 26 17:12:27 crc kubenswrapper[4856]: I0126 17:12:27.592136 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-xcqr4" event={"ID":"3dc10d6b-aa48-4c7d-afab-45fa62298819","Type":"ContainerStarted","Data":"b3ceac6859cf4cde100510b0215cca6038a853f9300b66b4346aa458836b853a"} Jan 26 17:12:27 crc kubenswrapper[4856]: I0126 17:12:27.594579 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-rm9wd" event={"ID":"24a9d780-2b57-49d2-9cb9-eac2456ed86d","Type":"ContainerStarted","Data":"c1544f4bd1c240b53131c6d9efe66c2aa6b12f5da34a643f11683134b5ae43a4"} Jan 26 17:12:27 crc kubenswrapper[4856]: E0126 17:12:27.595205 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"elastic-internal-init-filesystem\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/elasticsearch:7.17.20\\\"\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="8cba7b0b-8fbc-4d94-a808-43c46e0defaa" Jan 26 17:12:27 crc kubenswrapper[4856]: I0126 17:12:27.673916 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-xcqr4" podStartSLOduration=11.67387249 podStartE2EDuration="11.67387249s" podCreationTimestamp="2026-01-26 17:12:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:12:27.611896564 +0000 UTC m=+843.565150555" watchObservedRunningTime="2026-01-26 17:12:27.67387249 +0000 UTC m=+843.627126471" Jan 26 17:12:27 crc kubenswrapper[4856]: I0126 17:12:27.678910 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-rm9wd" podStartSLOduration=3.383466221 podStartE2EDuration="29.678889547s" podCreationTimestamp="2026-01-26 17:11:58 +0000 UTC" firstStartedPulling="2026-01-26 17:12:00.052535527 +0000 UTC m=+816.005789508" lastFinishedPulling="2026-01-26 17:12:26.347958853 +0000 UTC m=+842.301212834" observedRunningTime="2026-01-26 17:12:27.678540267 +0000 UTC m=+843.631794268" watchObservedRunningTime="2026-01-26 17:12:27.678889547 +0000 UTC m=+843.632143548" Jan 26 17:12:27 crc kubenswrapper[4856]: I0126 17:12:27.746719 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 26 17:12:27 crc kubenswrapper[4856]: I0126 17:12:27.782258 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 26 17:12:28 crc kubenswrapper[4856]: E0126 17:12:28.606401 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"elastic-internal-init-filesystem\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/elasticsearch:7.17.20\\\"\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="8cba7b0b-8fbc-4d94-a808-43c46e0defaa" Jan 26 17:12:29 crc kubenswrapper[4856]: E0126 17:12:29.610130 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/elasticsearch:7.17.20\\\"\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="8cba7b0b-8fbc-4d94-a808-43c46e0defaa" Jan 26 17:12:31 crc kubenswrapper[4856]: I0126 17:12:31.307855 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 26 17:12:31 crc kubenswrapper[4856]: I0126 17:12:31.309074 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 17:12:31 crc kubenswrapper[4856]: I0126 17:12:31.311320 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-sys-config" Jan 26 17:12:31 crc kubenswrapper[4856]: I0126 17:12:31.311320 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-ca" Jan 26 17:12:31 crc kubenswrapper[4856]: I0126 17:12:31.311321 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-global-ca" Jan 26 17:12:31 crc kubenswrapper[4856]: I0126 17:12:31.312296 4856 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-8h4xs" Jan 26 17:12:31 crc kubenswrapper[4856]: I0126 17:12:31.439139 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 17:12:31 crc kubenswrapper[4856]: I0126 17:12:31.439191 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 17:12:31 crc kubenswrapper[4856]: I0126 17:12:31.439228 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 17:12:31 crc kubenswrapper[4856]: I0126 17:12:31.440682 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 17:12:31 crc kubenswrapper[4856]: I0126 17:12:31.440713 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 17:12:31 
crc kubenswrapper[4856]: I0126 17:12:31.440752 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsm2b\" (UniqueName: \"kubernetes.io/projected/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-kube-api-access-gsm2b\") pod \"service-telemetry-operator-1-build\" (UID: \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 17:12:31 crc kubenswrapper[4856]: I0126 17:12:31.440779 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 17:12:31 crc kubenswrapper[4856]: I0126 17:12:31.440796 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 17:12:31 crc kubenswrapper[4856]: I0126 17:12:31.440810 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-builder-dockercfg-8h4xs-push\") pod \"service-telemetry-operator-1-build\" (UID: \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 17:12:31 crc kubenswrapper[4856]: I0126 17:12:31.440842 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 17:12:31 crc kubenswrapper[4856]: I0126 17:12:31.440864 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-builder-dockercfg-8h4xs-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 17:12:31 crc kubenswrapper[4856]: I0126 17:12:31.440884 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 17:12:31 crc kubenswrapper[4856]: I0126 17:12:31.459968 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 26 17:12:31 crc kubenswrapper[4856]: I0126 17:12:31.541654 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-builder-dockercfg-8h4xs-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 17:12:31 crc kubenswrapper[4856]: I0126 17:12:31.541706 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 17:12:31 crc kubenswrapper[4856]: I0126 17:12:31.541733 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 17:12:31 crc kubenswrapper[4856]: I0126 17:12:31.541757 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 17:12:31 crc kubenswrapper[4856]: I0126 17:12:31.541791 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 17:12:31 crc kubenswrapper[4856]: I0126 17:12:31.541865 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 17:12:31 crc kubenswrapper[4856]: I0126 17:12:31.541892 4856 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 17:12:31 crc kubenswrapper[4856]: I0126 17:12:31.541930 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsm2b\" (UniqueName: \"kubernetes.io/projected/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-kube-api-access-gsm2b\") pod \"service-telemetry-operator-1-build\" (UID: \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 17:12:31 crc kubenswrapper[4856]: I0126 17:12:31.541975 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 17:12:31 crc kubenswrapper[4856]: I0126 17:12:31.541998 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 17:12:31 crc kubenswrapper[4856]: I0126 17:12:31.542016 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-builder-dockercfg-8h4xs-push\") pod \"service-telemetry-operator-1-build\" (UID: \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\") " 
pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 17:12:31 crc kubenswrapper[4856]: I0126 17:12:31.542050 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 17:12:31 crc kubenswrapper[4856]: I0126 17:12:31.542716 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 17:12:31 crc kubenswrapper[4856]: I0126 17:12:31.542812 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 17:12:31 crc kubenswrapper[4856]: I0126 17:12:31.542904 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 17:12:31 crc kubenswrapper[4856]: I0126 17:12:31.542928 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: 
\"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 17:12:31 crc kubenswrapper[4856]: I0126 17:12:31.543042 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 17:12:31 crc kubenswrapper[4856]: I0126 17:12:31.543243 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 17:12:31 crc kubenswrapper[4856]: I0126 17:12:31.543249 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 17:12:31 crc kubenswrapper[4856]: I0126 17:12:31.543428 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 17:12:31 crc kubenswrapper[4856]: I0126 17:12:31.543794 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-container-storage-run\") 
pod \"service-telemetry-operator-1-build\" (UID: \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 17:12:31 crc kubenswrapper[4856]: I0126 17:12:31.548217 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-builder-dockercfg-8h4xs-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 17:12:31 crc kubenswrapper[4856]: I0126 17:12:31.548290 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-builder-dockercfg-8h4xs-push\") pod \"service-telemetry-operator-1-build\" (UID: \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 17:12:31 crc kubenswrapper[4856]: I0126 17:12:31.563043 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsm2b\" (UniqueName: \"kubernetes.io/projected/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-kube-api-access-gsm2b\") pod \"service-telemetry-operator-1-build\" (UID: \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 17:12:31 crc kubenswrapper[4856]: I0126 17:12:31.744337 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 17:12:32 crc kubenswrapper[4856]: I0126 17:12:32.240443 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 26 17:12:32 crc kubenswrapper[4856]: I0126 17:12:32.629160 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"c90021ae-76d4-45c7-a3b8-04e3b9bbddce","Type":"ContainerStarted","Data":"dd686a65c0eba5b54c5e815a6d4137b24d38f89f6901d1d1e3907daf3f0a3f46"} Jan 26 17:12:33 crc kubenswrapper[4856]: I0126 17:12:33.637315 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-www8b" event={"ID":"e9288910-baf7-4cc4-b313-c87b80bfdd3e","Type":"ContainerStarted","Data":"772a0c9d1eb894fb3a5347dc5ee0598386b020c26e7d289fcc87645f6477eef8"} Jan 26 17:12:33 crc kubenswrapper[4856]: I0126 17:12:33.638781 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-www8b" Jan 26 17:12:33 crc kubenswrapper[4856]: I0126 17:12:33.668055 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-www8b" podStartSLOduration=-9223372000.186808 podStartE2EDuration="36.667968226s" podCreationTimestamp="2026-01-26 17:11:57 +0000 UTC" firstStartedPulling="2026-01-26 17:11:57.965107341 +0000 UTC m=+813.918361322" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:12:33.662622889 +0000 UTC m=+849.615876960" watchObservedRunningTime="2026-01-26 17:12:33.667968226 +0000 UTC m=+849.621222207" Jan 26 17:12:39 crc kubenswrapper[4856]: I0126 17:12:39.677666 4856 generic.go:334] "Generic (PLEG): container finished" podID="c90021ae-76d4-45c7-a3b8-04e3b9bbddce" containerID="28aff785d4bd8d892e6c32fa63d73c371a6f2a38a3d0f6ac418a977bacaecdd0" exitCode=0 Jan 26 17:12:39 crc 
kubenswrapper[4856]: I0126 17:12:39.677782 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"c90021ae-76d4-45c7-a3b8-04e3b9bbddce","Type":"ContainerDied","Data":"28aff785d4bd8d892e6c32fa63d73c371a6f2a38a3d0f6ac418a977bacaecdd0"} Jan 26 17:12:40 crc kubenswrapper[4856]: I0126 17:12:40.687690 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"c90021ae-76d4-45c7-a3b8-04e3b9bbddce","Type":"ContainerStarted","Data":"2800b2e70dad7de0c38ad3d8f25f147fcc58b2b450a62480a1e7bc1db9736970"} Jan 26 17:12:40 crc kubenswrapper[4856]: I0126 17:12:40.714286 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-1-build" podStartSLOduration=3.3986747250000002 podStartE2EDuration="9.714259875s" podCreationTimestamp="2026-01-26 17:12:31 +0000 UTC" firstStartedPulling="2026-01-26 17:12:32.249817081 +0000 UTC m=+848.203071062" lastFinishedPulling="2026-01-26 17:12:38.565402221 +0000 UTC m=+854.518656212" observedRunningTime="2026-01-26 17:12:40.707214282 +0000 UTC m=+856.660468273" watchObservedRunningTime="2026-01-26 17:12:40.714259875 +0000 UTC m=+856.667513856" Jan 26 17:12:41 crc kubenswrapper[4856]: I0126 17:12:41.457551 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 26 17:12:42 crc kubenswrapper[4856]: I0126 17:12:42.503304 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-www8b" Jan 26 17:12:42 crc kubenswrapper[4856]: I0126 17:12:42.701813 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-1-build" podUID="c90021ae-76d4-45c7-a3b8-04e3b9bbddce" containerName="docker-build" 
containerID="cri-o://2800b2e70dad7de0c38ad3d8f25f147fcc58b2b450a62480a1e7bc1db9736970" gracePeriod=30 Jan 26 17:12:43 crc kubenswrapper[4856]: I0126 17:12:43.069992 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 26 17:12:43 crc kubenswrapper[4856]: I0126 17:12:43.071998 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 17:12:43 crc kubenswrapper[4856]: I0126 17:12:43.074264 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-global-ca" Jan 26 17:12:43 crc kubenswrapper[4856]: I0126 17:12:43.074319 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-ca" Jan 26 17:12:43 crc kubenswrapper[4856]: I0126 17:12:43.074276 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-sys-config" Jan 26 17:12:43 crc kubenswrapper[4856]: I0126 17:12:43.100655 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 26 17:12:43 crc kubenswrapper[4856]: I0126 17:12:43.104290 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 17:12:43 crc kubenswrapper[4856]: I0126 17:12:43.104348 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: 
\"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 17:12:43 crc kubenswrapper[4856]: I0126 17:12:43.104371 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 17:12:43 crc kubenswrapper[4856]: I0126 17:12:43.104391 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 17:12:43 crc kubenswrapper[4856]: I0126 17:12:43.104411 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 17:12:43 crc kubenswrapper[4856]: I0126 17:12:43.104442 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-builder-dockercfg-8h4xs-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 17:12:43 crc kubenswrapper[4856]: I0126 17:12:43.104471 4856 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 26 17:12:43 crc kubenswrapper[4856]: I0126 17:12:43.104549 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-builder-dockercfg-8h4xs-push\") pod \"service-telemetry-operator-2-build\" (UID: \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 26 17:12:43 crc kubenswrapper[4856]: I0126 17:12:43.104573 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 26 17:12:43 crc kubenswrapper[4856]: I0126 17:12:43.104602 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spxgc\" (UniqueName: \"kubernetes.io/projected/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-kube-api-access-spxgc\") pod \"service-telemetry-operator-2-build\" (UID: \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 26 17:12:43 crc kubenswrapper[4856]: I0126 17:12:43.104646 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 26 17:12:43 crc kubenswrapper[4856]: I0126 17:12:43.104698 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 26 17:12:43 crc kubenswrapper[4856]: I0126 17:12:43.205752 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 26 17:12:43 crc kubenswrapper[4856]: I0126 17:12:43.205843 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 26 17:12:43 crc kubenswrapper[4856]: I0126 17:12:43.205908 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 26 17:12:43 crc kubenswrapper[4856]: I0126 17:12:43.205950 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 26 17:12:43 crc kubenswrapper[4856]: I0126 17:12:43.205984 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 26 17:12:43 crc kubenswrapper[4856]: I0126 17:12:43.206012 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 26 17:12:43 crc kubenswrapper[4856]: I0126 17:12:43.206043 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 26 17:12:43 crc kubenswrapper[4856]: I0126 17:12:43.206074 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-builder-dockercfg-8h4xs-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 26 17:12:43 crc kubenswrapper[4856]: I0126 17:12:43.206132 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 26 17:12:43 crc kubenswrapper[4856]: I0126 17:12:43.206210 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-builder-dockercfg-8h4xs-push\") pod \"service-telemetry-operator-2-build\" (UID: \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 26 17:12:43 crc kubenswrapper[4856]: I0126 17:12:43.206271 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 26 17:12:43 crc kubenswrapper[4856]: I0126 17:12:43.206312 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spxgc\" (UniqueName: \"kubernetes.io/projected/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-kube-api-access-spxgc\") pod \"service-telemetry-operator-2-build\" (UID: \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 26 17:12:43 crc kubenswrapper[4856]: I0126 17:12:43.206600 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 26 17:12:43 crc kubenswrapper[4856]: I0126 17:12:43.206653 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 26 17:12:43 crc kubenswrapper[4856]: I0126 17:12:43.206960 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 26 17:12:43 crc kubenswrapper[4856]: I0126 17:12:43.248336 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 26 17:12:43 crc kubenswrapper[4856]: I0126 17:12:43.248389 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 26 17:12:43 crc kubenswrapper[4856]: I0126 17:12:43.249778 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 26 17:12:43 crc kubenswrapper[4856]: I0126 17:12:43.249871 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 26 17:12:43 crc kubenswrapper[4856]: I0126 17:12:43.250263 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 26 17:12:43 crc kubenswrapper[4856]: I0126 17:12:43.250611 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 26 17:12:43 crc kubenswrapper[4856]: I0126 17:12:43.254670 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-builder-dockercfg-8h4xs-push\") pod \"service-telemetry-operator-2-build\" (UID: \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 26 17:12:43 crc kubenswrapper[4856]: I0126 17:12:43.254863 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-builder-dockercfg-8h4xs-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 26 17:12:43 crc kubenswrapper[4856]: I0126 17:12:43.257641 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spxgc\" (UniqueName: \"kubernetes.io/projected/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-kube-api-access-spxgc\") pod \"service-telemetry-operator-2-build\" (UID: \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 26 17:12:43 crc kubenswrapper[4856]: I0126 17:12:43.395545 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build"
Jan 26 17:12:46 crc kubenswrapper[4856]: I0126 17:12:46.258660 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"]
Jan 26 17:12:46 crc kubenswrapper[4856]: I0126 17:12:46.729009 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"e3f6dcf4-c152-4a81-8e1d-1fdf469be581","Type":"ContainerStarted","Data":"f38fe849edfe940888da0c7e9589bf8433e33392b1573a13a0d673b63831ce2b"}
Jan 26 17:12:49 crc kubenswrapper[4856]: I0126 17:12:49.383783 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_c90021ae-76d4-45c7-a3b8-04e3b9bbddce/docker-build/0.log"
Jan 26 17:12:49 crc kubenswrapper[4856]: I0126 17:12:49.384809 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build"
Jan 26 17:12:49 crc kubenswrapper[4856]: I0126 17:12:49.429240 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_c90021ae-76d4-45c7-a3b8-04e3b9bbddce/docker-build/0.log"
Jan 26 17:12:49 crc kubenswrapper[4856]: I0126 17:12:49.433721 4856 generic.go:334] "Generic (PLEG): container finished" podID="c90021ae-76d4-45c7-a3b8-04e3b9bbddce" containerID="2800b2e70dad7de0c38ad3d8f25f147fcc58b2b450a62480a1e7bc1db9736970" exitCode=1
Jan 26 17:12:49 crc kubenswrapper[4856]: I0126 17:12:49.433767 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"c90021ae-76d4-45c7-a3b8-04e3b9bbddce","Type":"ContainerDied","Data":"2800b2e70dad7de0c38ad3d8f25f147fcc58b2b450a62480a1e7bc1db9736970"}
Jan 26 17:12:49 crc kubenswrapper[4856]: I0126 17:12:49.433811 4856 scope.go:117] "RemoveContainer" containerID="2800b2e70dad7de0c38ad3d8f25f147fcc58b2b450a62480a1e7bc1db9736970"
Jan 26 17:12:49 crc kubenswrapper[4856]: I0126 17:12:49.453363 4856 scope.go:117] "RemoveContainer" containerID="28aff785d4bd8d892e6c32fa63d73c371a6f2a38a3d0f6ac418a977bacaecdd0"
Jan 26 17:12:49 crc kubenswrapper[4856]: I0126 17:12:49.494998 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-builder-dockercfg-8h4xs-pull\") pod \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\" (UID: \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\") "
Jan 26 17:12:49 crc kubenswrapper[4856]: I0126 17:12:49.495111 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-container-storage-run\") pod \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\" (UID: \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\") "
Jan 26 17:12:49 crc kubenswrapper[4856]: I0126 17:12:49.495131 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-builder-dockercfg-8h4xs-push\") pod \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\" (UID: \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\") "
Jan 26 17:12:49 crc kubenswrapper[4856]: I0126 17:12:49.495154 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-container-storage-root\") pod \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\" (UID: \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\") "
Jan 26 17:12:49 crc kubenswrapper[4856]: I0126 17:12:49.495198 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-build-system-configs\") pod \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\" (UID: \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\") "
Jan 26 17:12:49 crc kubenswrapper[4856]: I0126 17:12:49.495217 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-buildworkdir\") pod \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\" (UID: \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\") "
Jan 26 17:12:49 crc kubenswrapper[4856]: I0126 17:12:49.495252 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-buildcachedir\") pod \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\" (UID: \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\") "
Jan 26 17:12:49 crc kubenswrapper[4856]: I0126 17:12:49.495268 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-build-ca-bundles\") pod \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\" (UID: \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\") "
Jan 26 17:12:49 crc kubenswrapper[4856]: I0126 17:12:49.495315 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsm2b\" (UniqueName: \"kubernetes.io/projected/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-kube-api-access-gsm2b\") pod \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\" (UID: \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\") "
Jan 26 17:12:49 crc kubenswrapper[4856]: I0126 17:12:49.495350 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-build-blob-cache\") pod \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\" (UID: \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\") "
Jan 26 17:12:49 crc kubenswrapper[4856]: I0126 17:12:49.495364 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-node-pullsecrets\") pod \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\" (UID: \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\") "
Jan 26 17:12:49 crc kubenswrapper[4856]: I0126 17:12:49.495418 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-build-proxy-ca-bundles\") pod \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\" (UID: \"c90021ae-76d4-45c7-a3b8-04e3b9bbddce\") "
Jan 26 17:12:49 crc kubenswrapper[4856]: I0126 17:12:49.495591 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "c90021ae-76d4-45c7-a3b8-04e3b9bbddce" (UID: "c90021ae-76d4-45c7-a3b8-04e3b9bbddce"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 17:12:49 crc kubenswrapper[4856]: I0126 17:12:49.496008 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "c90021ae-76d4-45c7-a3b8-04e3b9bbddce" (UID: "c90021ae-76d4-45c7-a3b8-04e3b9bbddce"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 17:12:49 crc kubenswrapper[4856]: I0126 17:12:49.496039 4856 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-buildcachedir\") on node \"crc\" DevicePath \"\""
Jan 26 17:12:49 crc kubenswrapper[4856]: I0126 17:12:49.496154 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "c90021ae-76d4-45c7-a3b8-04e3b9bbddce" (UID: "c90021ae-76d4-45c7-a3b8-04e3b9bbddce"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 17:12:49 crc kubenswrapper[4856]: I0126 17:12:49.496180 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "c90021ae-76d4-45c7-a3b8-04e3b9bbddce" (UID: "c90021ae-76d4-45c7-a3b8-04e3b9bbddce"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 17:12:49 crc kubenswrapper[4856]: I0126 17:12:49.496457 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "c90021ae-76d4-45c7-a3b8-04e3b9bbddce" (UID: "c90021ae-76d4-45c7-a3b8-04e3b9bbddce"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 17:12:49 crc kubenswrapper[4856]: I0126 17:12:49.496874 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "c90021ae-76d4-45c7-a3b8-04e3b9bbddce" (UID: "c90021ae-76d4-45c7-a3b8-04e3b9bbddce"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 17:12:49 crc kubenswrapper[4856]: I0126 17:12:49.497334 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "c90021ae-76d4-45c7-a3b8-04e3b9bbddce" (UID: "c90021ae-76d4-45c7-a3b8-04e3b9bbddce"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 17:12:49 crc kubenswrapper[4856]: I0126 17:12:49.497394 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "c90021ae-76d4-45c7-a3b8-04e3b9bbddce" (UID: "c90021ae-76d4-45c7-a3b8-04e3b9bbddce"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 17:12:49 crc kubenswrapper[4856]: I0126 17:12:49.497733 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "c90021ae-76d4-45c7-a3b8-04e3b9bbddce" (UID: "c90021ae-76d4-45c7-a3b8-04e3b9bbddce"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 17:12:49 crc kubenswrapper[4856]: I0126 17:12:49.500032 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-builder-dockercfg-8h4xs-pull" (OuterVolumeSpecName: "builder-dockercfg-8h4xs-pull") pod "c90021ae-76d4-45c7-a3b8-04e3b9bbddce" (UID: "c90021ae-76d4-45c7-a3b8-04e3b9bbddce"). InnerVolumeSpecName "builder-dockercfg-8h4xs-pull". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 17:12:49 crc kubenswrapper[4856]: I0126 17:12:49.500055 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-builder-dockercfg-8h4xs-push" (OuterVolumeSpecName: "builder-dockercfg-8h4xs-push") pod "c90021ae-76d4-45c7-a3b8-04e3b9bbddce" (UID: "c90021ae-76d4-45c7-a3b8-04e3b9bbddce"). InnerVolumeSpecName "builder-dockercfg-8h4xs-push". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 17:12:49 crc kubenswrapper[4856]: I0126 17:12:49.500436 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-kube-api-access-gsm2b" (OuterVolumeSpecName: "kube-api-access-gsm2b") pod "c90021ae-76d4-45c7-a3b8-04e3b9bbddce" (UID: "c90021ae-76d4-45c7-a3b8-04e3b9bbddce"). InnerVolumeSpecName "kube-api-access-gsm2b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 17:12:49 crc kubenswrapper[4856]: I0126 17:12:49.597474 4856 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-container-storage-run\") on node \"crc\" DevicePath \"\""
Jan 26 17:12:49 crc kubenswrapper[4856]: I0126 17:12:49.597515 4856 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-builder-dockercfg-8h4xs-push\") on node \"crc\" DevicePath \"\""
Jan 26 17:12:49 crc kubenswrapper[4856]: I0126 17:12:49.597574 4856 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-container-storage-root\") on node \"crc\" DevicePath \"\""
Jan 26 17:12:49 crc kubenswrapper[4856]: I0126 17:12:49.597587 4856 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-build-system-configs\") on node \"crc\" DevicePath \"\""
Jan 26 17:12:49 crc kubenswrapper[4856]: I0126 17:12:49.597596 4856 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-buildworkdir\") on node \"crc\" DevicePath \"\""
Jan 26 17:12:49 crc kubenswrapper[4856]: I0126 17:12:49.597604 4856 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 26 17:12:49 crc kubenswrapper[4856]: I0126 17:12:49.597616 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsm2b\" (UniqueName: \"kubernetes.io/projected/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-kube-api-access-gsm2b\") on node \"crc\" DevicePath \"\""
Jan 26 17:12:49 crc kubenswrapper[4856]: I0126 17:12:49.597624 4856 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-build-blob-cache\") on node \"crc\" DevicePath \"\""
Jan 26 17:12:49 crc kubenswrapper[4856]: I0126 17:12:49.597631 4856 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Jan 26 17:12:49 crc kubenswrapper[4856]: I0126 17:12:49.597638 4856 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 26 17:12:49 crc kubenswrapper[4856]: I0126 17:12:49.597646 4856 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/c90021ae-76d4-45c7-a3b8-04e3b9bbddce-builder-dockercfg-8h4xs-pull\") on node \"crc\" DevicePath \"\""
Jan 26 17:12:50 crc kubenswrapper[4856]: I0126 17:12:50.442154 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"c90021ae-76d4-45c7-a3b8-04e3b9bbddce","Type":"ContainerDied","Data":"dd686a65c0eba5b54c5e815a6d4137b24d38f89f6901d1d1e3907daf3f0a3f46"}
Jan 26 17:12:50 crc kubenswrapper[4856]: I0126 17:12:50.442178 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build"
Jan 26 17:12:50 crc kubenswrapper[4856]: I0126 17:12:50.447096 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"e3f6dcf4-c152-4a81-8e1d-1fdf469be581","Type":"ContainerStarted","Data":"afbb877df6ecee92e6604885fb086f3aa9571ea00f867e7fec22ca0c0785ba91"}
Jan 26 17:12:50 crc kubenswrapper[4856]: I0126 17:12:50.448080 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"8cba7b0b-8fbc-4d94-a808-43c46e0defaa","Type":"ContainerStarted","Data":"58cea0944e41ac391efcc221b5257ed0cbfcc9d9d995c19c749b68acf88653a0"}
Jan 26 17:12:50 crc kubenswrapper[4856]: I0126 17:12:50.535801 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"]
Jan 26 17:12:50 crc kubenswrapper[4856]: I0126 17:12:50.547482 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"]
Jan 26 17:12:51 crc kubenswrapper[4856]: I0126 17:12:51.404856 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c90021ae-76d4-45c7-a3b8-04e3b9bbddce" path="/var/lib/kubelet/pods/c90021ae-76d4-45c7-a3b8-04e3b9bbddce/volumes"
Jan 26 17:12:51 crc kubenswrapper[4856]: I0126 17:12:51.458217 4856 generic.go:334] "Generic (PLEG): container finished" podID="8cba7b0b-8fbc-4d94-a808-43c46e0defaa" containerID="58cea0944e41ac391efcc221b5257ed0cbfcc9d9d995c19c749b68acf88653a0" exitCode=0
Jan 26 17:12:51 crc kubenswrapper[4856]: I0126 17:12:51.458626 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"8cba7b0b-8fbc-4d94-a808-43c46e0defaa","Type":"ContainerDied","Data":"58cea0944e41ac391efcc221b5257ed0cbfcc9d9d995c19c749b68acf88653a0"}
Jan 26 17:12:52 crc kubenswrapper[4856]: I0126 17:12:52.464885 4856 generic.go:334] "Generic (PLEG): container finished" podID="8cba7b0b-8fbc-4d94-a808-43c46e0defaa" containerID="afc0c6940e36d6454494111916ed37a8c0609122aca51e5a53f48c9728aefbc7" exitCode=0
Jan 26 17:12:52 crc kubenswrapper[4856]: I0126 17:12:52.464946 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"8cba7b0b-8fbc-4d94-a808-43c46e0defaa","Type":"ContainerDied","Data":"afc0c6940e36d6454494111916ed37a8c0609122aca51e5a53f48c9728aefbc7"}
Jan 26 17:12:53 crc kubenswrapper[4856]: I0126 17:12:53.474829 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"8cba7b0b-8fbc-4d94-a808-43c46e0defaa","Type":"ContainerStarted","Data":"4a31e616d23d9c76745e34247cf9b597f7d5d4019e0803d40e84b05d4263e2a0"}
Jan 26 17:12:53 crc kubenswrapper[4856]: I0126 17:12:53.475556 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/elasticsearch-es-default-0"
Jan 26 17:12:53 crc kubenswrapper[4856]: I0126 17:12:53.509977 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=5.905538427 podStartE2EDuration="58.509936588s" podCreationTimestamp="2026-01-26 17:11:55 +0000 UTC" firstStartedPulling="2026-01-26 17:11:56.569410626 +0000 UTC m=+812.522664607" lastFinishedPulling="2026-01-26 17:12:49.173808787 +0000 UTC m=+865.127062768" observedRunningTime="2026-01-26 17:12:53.503881982 +0000 UTC m=+869.457135963" watchObservedRunningTime="2026-01-26 17:12:53.509936588 +0000 UTC m=+869.463190569"
Jan 26 17:12:56 crc kubenswrapper[4856]: I0126 17:12:56.938473 4856 patch_prober.go:28] interesting pod/machine-config-daemon-xm9cq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 17:12:56 crc kubenswrapper[4856]: I0126 17:12:56.939090 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 17:12:56 crc kubenswrapper[4856]: I0126 17:12:56.939183 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq"
Jan 26 17:12:56 crc kubenswrapper[4856]: I0126 17:12:56.940889 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bb3fb578d0ea2b4eb264b402043faa4d1923f5d38749a2ee2c65b084c2e291bd"} pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 26 17:12:56 crc kubenswrapper[4856]: I0126 17:12:56.940980 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" containerName="machine-config-daemon" containerID="cri-o://bb3fb578d0ea2b4eb264b402043faa4d1923f5d38749a2ee2c65b084c2e291bd" gracePeriod=600
Jan 26 17:12:57 crc kubenswrapper[4856]: I0126 17:12:57.559395 4856 generic.go:334] "Generic (PLEG): container finished" podID="63c75ede-5170-4db0-811b-5217ef8d72b3" containerID="bb3fb578d0ea2b4eb264b402043faa4d1923f5d38749a2ee2c65b084c2e291bd" exitCode=0
Jan 26 17:12:57 crc kubenswrapper[4856]: I0126 17:12:57.559473 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" event={"ID":"63c75ede-5170-4db0-811b-5217ef8d72b3","Type":"ContainerDied","Data":"bb3fb578d0ea2b4eb264b402043faa4d1923f5d38749a2ee2c65b084c2e291bd"}
Jan 26 17:12:57 crc kubenswrapper[4856]: I0126 17:12:57.559840 4856 scope.go:117] "RemoveContainer" containerID="fe42c0299ac9f35a2260caaf7226f7e2161da013442117dab0d25a7c69c46115"
Jan 26 17:12:58 crc kubenswrapper[4856]: I0126 17:12:58.579969 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" event={"ID":"63c75ede-5170-4db0-811b-5217ef8d72b3","Type":"ContainerStarted","Data":"fdaad4602089daad40b0395fbc761e615a8ba2a94c8f5b977142a787034cddb7"}
Jan 26 17:12:59 crc kubenswrapper[4856]: I0126 17:12:59.587666 4856 generic.go:334] "Generic (PLEG): container finished" podID="e3f6dcf4-c152-4a81-8e1d-1fdf469be581" containerID="afbb877df6ecee92e6604885fb086f3aa9571ea00f867e7fec22ca0c0785ba91" exitCode=0
Jan 26 17:12:59 crc kubenswrapper[4856]: I0126 17:12:59.587738 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"e3f6dcf4-c152-4a81-8e1d-1fdf469be581","Type":"ContainerDied","Data":"afbb877df6ecee92e6604885fb086f3aa9571ea00f867e7fec22ca0c0785ba91"}
Jan 26 17:13:00 crc kubenswrapper[4856]: I0126 17:13:00.595694 4856 generic.go:334] "Generic (PLEG): container finished" podID="e3f6dcf4-c152-4a81-8e1d-1fdf469be581" containerID="8e37eff03e3f55d608de033572142249146d18f2f61d278dcf3a02aa6f6ff2ab" exitCode=0
Jan 26 17:13:00 crc kubenswrapper[4856]: I0126 17:13:00.595762 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"e3f6dcf4-c152-4a81-8e1d-1fdf469be581","Type":"ContainerDied","Data":"8e37eff03e3f55d608de033572142249146d18f2f61d278dcf3a02aa6f6ff2ab"}
Jan 26 17:13:00 crc kubenswrapper[4856]: I0126 17:13:00.642007 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_e3f6dcf4-c152-4a81-8e1d-1fdf469be581/manage-dockerfile/0.log"
Jan 26 17:13:01 crc kubenswrapper[4856]: I0126 17:13:01.673673 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"e3f6dcf4-c152-4a81-8e1d-1fdf469be581","Type":"ContainerStarted","Data":"acf8873d5a9fd2dc945aa7f942f92399d79aa34d23d46be85cf69d51f18751c1"}
Jan 26 17:13:01 crc kubenswrapper[4856]: I0126 17:13:01.713558 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-2-build" podStartSLOduration=18.713515358 podStartE2EDuration="18.713515358s" podCreationTimestamp="2026-01-26 17:12:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:13:01.705860629 +0000 UTC m=+877.659114620" watchObservedRunningTime="2026-01-26 17:13:01.713515358 +0000 UTC m=+877.666769339"
Jan 26 17:13:06 crc kubenswrapper[4856]: I0126 17:13:06.423186 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="8cba7b0b-8fbc-4d94-a808-43c46e0defaa" containerName="elasticsearch" probeResult="failure" output=<
Jan 26 17:13:06 crc kubenswrapper[4856]: {"timestamp": "2026-01-26T17:13:06+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Jan 26 17:13:06 crc kubenswrapper[4856]: >
Jan 26 17:13:11 crc kubenswrapper[4856]: I0126 17:13:11.252487 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="8cba7b0b-8fbc-4d94-a808-43c46e0defaa" containerName="elasticsearch" probeResult="failure" output=<
Jan 26 17:13:11 crc kubenswrapper[4856]: {"timestamp": "2026-01-26T17:13:11+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Jan 26 17:13:11 crc kubenswrapper[4856]: >
Jan 26 17:13:16 crc kubenswrapper[4856]: I0126 17:13:16.373422 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="8cba7b0b-8fbc-4d94-a808-43c46e0defaa" containerName="elasticsearch" probeResult="failure" output=<
Jan 26 17:13:16 crc kubenswrapper[4856]: {"timestamp": "2026-01-26T17:13:16+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Jan 26 17:13:16 crc kubenswrapper[4856]: >
Jan 26 17:13:21 crc kubenswrapper[4856]: I0126 17:13:21.233422 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="8cba7b0b-8fbc-4d94-a808-43c46e0defaa" containerName="elasticsearch" probeResult="failure" output=<
Jan 26 17:13:21 crc kubenswrapper[4856]: {"timestamp": "2026-01-26T17:13:21+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Jan 26 17:13:21 crc kubenswrapper[4856]: >
Jan 26 17:13:26 crc kubenswrapper[4856]: I0126 17:13:26.344951 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0"
Jan 26 17:13:36 crc kubenswrapper[4856]: I0126 17:13:36.978680 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2sbfj"]
Jan 26 17:13:36 crc kubenswrapper[4856]: E0126 17:13:36.979476 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c90021ae-76d4-45c7-a3b8-04e3b9bbddce" containerName="docker-build"
Jan 26 17:13:36 crc kubenswrapper[4856]: I0126 17:13:36.979494 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="c90021ae-76d4-45c7-a3b8-04e3b9bbddce" containerName="docker-build"
Jan 26 17:13:36 crc kubenswrapper[4856]: E0126 17:13:36.979504 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c90021ae-76d4-45c7-a3b8-04e3b9bbddce" containerName="manage-dockerfile"
Jan 26 17:13:36 crc kubenswrapper[4856]: I0126 17:13:36.979510 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="c90021ae-76d4-45c7-a3b8-04e3b9bbddce" containerName="manage-dockerfile"
Jan 26 17:13:36 crc kubenswrapper[4856]: I0126 17:13:36.979694 4856 memory_manager.go:354] "RemoveStaleState removing state"
podUID="c90021ae-76d4-45c7-a3b8-04e3b9bbddce" containerName="docker-build" Jan 26 17:13:36 crc kubenswrapper[4856]: I0126 17:13:36.980805 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2sbfj" Jan 26 17:13:36 crc kubenswrapper[4856]: I0126 17:13:36.997263 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2sbfj"] Jan 26 17:13:37 crc kubenswrapper[4856]: I0126 17:13:37.038954 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40ded4e9-2f52-405d-80fb-b4fef311cbc1-catalog-content\") pod \"redhat-operators-2sbfj\" (UID: \"40ded4e9-2f52-405d-80fb-b4fef311cbc1\") " pod="openshift-marketplace/redhat-operators-2sbfj" Jan 26 17:13:37 crc kubenswrapper[4856]: I0126 17:13:37.039182 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40ded4e9-2f52-405d-80fb-b4fef311cbc1-utilities\") pod \"redhat-operators-2sbfj\" (UID: \"40ded4e9-2f52-405d-80fb-b4fef311cbc1\") " pod="openshift-marketplace/redhat-operators-2sbfj" Jan 26 17:13:37 crc kubenswrapper[4856]: I0126 17:13:37.039272 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl4bs\" (UniqueName: \"kubernetes.io/projected/40ded4e9-2f52-405d-80fb-b4fef311cbc1-kube-api-access-gl4bs\") pod \"redhat-operators-2sbfj\" (UID: \"40ded4e9-2f52-405d-80fb-b4fef311cbc1\") " pod="openshift-marketplace/redhat-operators-2sbfj" Jan 26 17:13:37 crc kubenswrapper[4856]: I0126 17:13:37.140137 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40ded4e9-2f52-405d-80fb-b4fef311cbc1-utilities\") pod \"redhat-operators-2sbfj\" (UID: \"40ded4e9-2f52-405d-80fb-b4fef311cbc1\") " 
pod="openshift-marketplace/redhat-operators-2sbfj" Jan 26 17:13:37 crc kubenswrapper[4856]: I0126 17:13:37.140222 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl4bs\" (UniqueName: \"kubernetes.io/projected/40ded4e9-2f52-405d-80fb-b4fef311cbc1-kube-api-access-gl4bs\") pod \"redhat-operators-2sbfj\" (UID: \"40ded4e9-2f52-405d-80fb-b4fef311cbc1\") " pod="openshift-marketplace/redhat-operators-2sbfj" Jan 26 17:13:37 crc kubenswrapper[4856]: I0126 17:13:37.140270 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40ded4e9-2f52-405d-80fb-b4fef311cbc1-catalog-content\") pod \"redhat-operators-2sbfj\" (UID: \"40ded4e9-2f52-405d-80fb-b4fef311cbc1\") " pod="openshift-marketplace/redhat-operators-2sbfj" Jan 26 17:13:37 crc kubenswrapper[4856]: I0126 17:13:37.140935 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40ded4e9-2f52-405d-80fb-b4fef311cbc1-catalog-content\") pod \"redhat-operators-2sbfj\" (UID: \"40ded4e9-2f52-405d-80fb-b4fef311cbc1\") " pod="openshift-marketplace/redhat-operators-2sbfj" Jan 26 17:13:37 crc kubenswrapper[4856]: I0126 17:13:37.140953 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40ded4e9-2f52-405d-80fb-b4fef311cbc1-utilities\") pod \"redhat-operators-2sbfj\" (UID: \"40ded4e9-2f52-405d-80fb-b4fef311cbc1\") " pod="openshift-marketplace/redhat-operators-2sbfj" Jan 26 17:13:37 crc kubenswrapper[4856]: I0126 17:13:37.267083 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gl4bs\" (UniqueName: \"kubernetes.io/projected/40ded4e9-2f52-405d-80fb-b4fef311cbc1-kube-api-access-gl4bs\") pod \"redhat-operators-2sbfj\" (UID: \"40ded4e9-2f52-405d-80fb-b4fef311cbc1\") " pod="openshift-marketplace/redhat-operators-2sbfj" Jan 
26 17:13:37 crc kubenswrapper[4856]: I0126 17:13:37.299835 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2sbfj" Jan 26 17:13:38 crc kubenswrapper[4856]: I0126 17:13:38.347765 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2sbfj"] Jan 26 17:13:38 crc kubenswrapper[4856]: W0126 17:13:38.352255 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40ded4e9_2f52_405d_80fb_b4fef311cbc1.slice/crio-f7181ecfdf3ef508ac91ce2ecf81641c9c7317da5d322beba5c68919dc3a259b WatchSource:0}: Error finding container f7181ecfdf3ef508ac91ce2ecf81641c9c7317da5d322beba5c68919dc3a259b: Status 404 returned error can't find the container with id f7181ecfdf3ef508ac91ce2ecf81641c9c7317da5d322beba5c68919dc3a259b Jan 26 17:13:39 crc kubenswrapper[4856]: I0126 17:13:39.271477 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2sbfj" event={"ID":"40ded4e9-2f52-405d-80fb-b4fef311cbc1","Type":"ContainerStarted","Data":"e5ebb09819be3c66a7ddcceda562a758f2dcc2ef2bb1e57d8921f3dec667e919"} Jan 26 17:13:39 crc kubenswrapper[4856]: I0126 17:13:39.271878 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2sbfj" event={"ID":"40ded4e9-2f52-405d-80fb-b4fef311cbc1","Type":"ContainerStarted","Data":"f7181ecfdf3ef508ac91ce2ecf81641c9c7317da5d322beba5c68919dc3a259b"} Jan 26 17:13:40 crc kubenswrapper[4856]: I0126 17:13:40.279242 4856 generic.go:334] "Generic (PLEG): container finished" podID="40ded4e9-2f52-405d-80fb-b4fef311cbc1" containerID="e5ebb09819be3c66a7ddcceda562a758f2dcc2ef2bb1e57d8921f3dec667e919" exitCode=0 Jan 26 17:13:40 crc kubenswrapper[4856]: I0126 17:13:40.279293 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2sbfj" 
event={"ID":"40ded4e9-2f52-405d-80fb-b4fef311cbc1","Type":"ContainerDied","Data":"e5ebb09819be3c66a7ddcceda562a758f2dcc2ef2bb1e57d8921f3dec667e919"} Jan 26 17:13:43 crc kubenswrapper[4856]: I0126 17:13:43.313273 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2sbfj" event={"ID":"40ded4e9-2f52-405d-80fb-b4fef311cbc1","Type":"ContainerStarted","Data":"9707c6c51dc5a30aeb3a0d380d791f52a183e2d158ce4607cc75a52d2079c292"} Jan 26 17:13:46 crc kubenswrapper[4856]: I0126 17:13:46.334130 4856 generic.go:334] "Generic (PLEG): container finished" podID="40ded4e9-2f52-405d-80fb-b4fef311cbc1" containerID="9707c6c51dc5a30aeb3a0d380d791f52a183e2d158ce4607cc75a52d2079c292" exitCode=0 Jan 26 17:13:46 crc kubenswrapper[4856]: I0126 17:13:46.334424 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2sbfj" event={"ID":"40ded4e9-2f52-405d-80fb-b4fef311cbc1","Type":"ContainerDied","Data":"9707c6c51dc5a30aeb3a0d380d791f52a183e2d158ce4607cc75a52d2079c292"} Jan 26 17:13:47 crc kubenswrapper[4856]: I0126 17:13:47.468868 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2sbfj" event={"ID":"40ded4e9-2f52-405d-80fb-b4fef311cbc1","Type":"ContainerStarted","Data":"8f51bdcd1b85974530b2a951fcef71cea685b1e499cbb81c32b339883c0f05ca"} Jan 26 17:13:47 crc kubenswrapper[4856]: I0126 17:13:47.552040 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2sbfj" podStartSLOduration=5.094333378 podStartE2EDuration="11.551992493s" podCreationTimestamp="2026-01-26 17:13:36 +0000 UTC" firstStartedPulling="2026-01-26 17:13:40.282435118 +0000 UTC m=+916.235689099" lastFinishedPulling="2026-01-26 17:13:46.740094233 +0000 UTC m=+922.693348214" observedRunningTime="2026-01-26 17:13:47.550957833 +0000 UTC m=+923.504211834" watchObservedRunningTime="2026-01-26 17:13:47.551992493 +0000 UTC m=+923.505246494" Jan 
26 17:13:57 crc kubenswrapper[4856]: I0126 17:13:57.299971 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2sbfj" Jan 26 17:13:57 crc kubenswrapper[4856]: I0126 17:13:57.302006 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2sbfj" Jan 26 17:13:57 crc kubenswrapper[4856]: I0126 17:13:57.347324 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2sbfj" Jan 26 17:13:57 crc kubenswrapper[4856]: I0126 17:13:57.778221 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2sbfj" Jan 26 17:13:57 crc kubenswrapper[4856]: I0126 17:13:57.830048 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2sbfj"] Jan 26 17:13:59 crc kubenswrapper[4856]: I0126 17:13:59.750717 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2sbfj" podUID="40ded4e9-2f52-405d-80fb-b4fef311cbc1" containerName="registry-server" containerID="cri-o://8f51bdcd1b85974530b2a951fcef71cea685b1e499cbb81c32b339883c0f05ca" gracePeriod=2 Jan 26 17:14:00 crc kubenswrapper[4856]: I0126 17:14:00.172442 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2sbfj" Jan 26 17:14:00 crc kubenswrapper[4856]: I0126 17:14:00.347053 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gl4bs\" (UniqueName: \"kubernetes.io/projected/40ded4e9-2f52-405d-80fb-b4fef311cbc1-kube-api-access-gl4bs\") pod \"40ded4e9-2f52-405d-80fb-b4fef311cbc1\" (UID: \"40ded4e9-2f52-405d-80fb-b4fef311cbc1\") " Jan 26 17:14:00 crc kubenswrapper[4856]: I0126 17:14:00.347136 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40ded4e9-2f52-405d-80fb-b4fef311cbc1-utilities\") pod \"40ded4e9-2f52-405d-80fb-b4fef311cbc1\" (UID: \"40ded4e9-2f52-405d-80fb-b4fef311cbc1\") " Jan 26 17:14:00 crc kubenswrapper[4856]: I0126 17:14:00.347268 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40ded4e9-2f52-405d-80fb-b4fef311cbc1-catalog-content\") pod \"40ded4e9-2f52-405d-80fb-b4fef311cbc1\" (UID: \"40ded4e9-2f52-405d-80fb-b4fef311cbc1\") " Jan 26 17:14:00 crc kubenswrapper[4856]: I0126 17:14:00.348180 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40ded4e9-2f52-405d-80fb-b4fef311cbc1-utilities" (OuterVolumeSpecName: "utilities") pod "40ded4e9-2f52-405d-80fb-b4fef311cbc1" (UID: "40ded4e9-2f52-405d-80fb-b4fef311cbc1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:14:00 crc kubenswrapper[4856]: I0126 17:14:00.354117 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40ded4e9-2f52-405d-80fb-b4fef311cbc1-kube-api-access-gl4bs" (OuterVolumeSpecName: "kube-api-access-gl4bs") pod "40ded4e9-2f52-405d-80fb-b4fef311cbc1" (UID: "40ded4e9-2f52-405d-80fb-b4fef311cbc1"). InnerVolumeSpecName "kube-api-access-gl4bs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:14:00 crc kubenswrapper[4856]: I0126 17:14:00.449618 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gl4bs\" (UniqueName: \"kubernetes.io/projected/40ded4e9-2f52-405d-80fb-b4fef311cbc1-kube-api-access-gl4bs\") on node \"crc\" DevicePath \"\"" Jan 26 17:14:00 crc kubenswrapper[4856]: I0126 17:14:00.449667 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40ded4e9-2f52-405d-80fb-b4fef311cbc1-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:14:00 crc kubenswrapper[4856]: I0126 17:14:00.474875 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40ded4e9-2f52-405d-80fb-b4fef311cbc1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "40ded4e9-2f52-405d-80fb-b4fef311cbc1" (UID: "40ded4e9-2f52-405d-80fb-b4fef311cbc1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:14:00 crc kubenswrapper[4856]: I0126 17:14:00.550830 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40ded4e9-2f52-405d-80fb-b4fef311cbc1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:14:00 crc kubenswrapper[4856]: I0126 17:14:00.760246 4856 generic.go:334] "Generic (PLEG): container finished" podID="40ded4e9-2f52-405d-80fb-b4fef311cbc1" containerID="8f51bdcd1b85974530b2a951fcef71cea685b1e499cbb81c32b339883c0f05ca" exitCode=0 Jan 26 17:14:00 crc kubenswrapper[4856]: I0126 17:14:00.761266 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2sbfj" event={"ID":"40ded4e9-2f52-405d-80fb-b4fef311cbc1","Type":"ContainerDied","Data":"8f51bdcd1b85974530b2a951fcef71cea685b1e499cbb81c32b339883c0f05ca"} Jan 26 17:14:00 crc kubenswrapper[4856]: I0126 17:14:00.761381 4856 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-2sbfj" event={"ID":"40ded4e9-2f52-405d-80fb-b4fef311cbc1","Type":"ContainerDied","Data":"f7181ecfdf3ef508ac91ce2ecf81641c9c7317da5d322beba5c68919dc3a259b"} Jan 26 17:14:00 crc kubenswrapper[4856]: I0126 17:14:00.761500 4856 scope.go:117] "RemoveContainer" containerID="8f51bdcd1b85974530b2a951fcef71cea685b1e499cbb81c32b339883c0f05ca" Jan 26 17:14:00 crc kubenswrapper[4856]: I0126 17:14:00.761726 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2sbfj" Jan 26 17:14:00 crc kubenswrapper[4856]: I0126 17:14:00.784330 4856 scope.go:117] "RemoveContainer" containerID="9707c6c51dc5a30aeb3a0d380d791f52a183e2d158ce4607cc75a52d2079c292" Jan 26 17:14:00 crc kubenswrapper[4856]: I0126 17:14:00.797777 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2sbfj"] Jan 26 17:14:00 crc kubenswrapper[4856]: I0126 17:14:00.804563 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2sbfj"] Jan 26 17:14:00 crc kubenswrapper[4856]: I0126 17:14:00.824026 4856 scope.go:117] "RemoveContainer" containerID="e5ebb09819be3c66a7ddcceda562a758f2dcc2ef2bb1e57d8921f3dec667e919" Jan 26 17:14:00 crc kubenswrapper[4856]: I0126 17:14:00.838709 4856 scope.go:117] "RemoveContainer" containerID="8f51bdcd1b85974530b2a951fcef71cea685b1e499cbb81c32b339883c0f05ca" Jan 26 17:14:00 crc kubenswrapper[4856]: E0126 17:14:00.839184 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f51bdcd1b85974530b2a951fcef71cea685b1e499cbb81c32b339883c0f05ca\": container with ID starting with 8f51bdcd1b85974530b2a951fcef71cea685b1e499cbb81c32b339883c0f05ca not found: ID does not exist" containerID="8f51bdcd1b85974530b2a951fcef71cea685b1e499cbb81c32b339883c0f05ca" Jan 26 17:14:00 crc kubenswrapper[4856]: I0126 17:14:00.839260 4856 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f51bdcd1b85974530b2a951fcef71cea685b1e499cbb81c32b339883c0f05ca"} err="failed to get container status \"8f51bdcd1b85974530b2a951fcef71cea685b1e499cbb81c32b339883c0f05ca\": rpc error: code = NotFound desc = could not find container \"8f51bdcd1b85974530b2a951fcef71cea685b1e499cbb81c32b339883c0f05ca\": container with ID starting with 8f51bdcd1b85974530b2a951fcef71cea685b1e499cbb81c32b339883c0f05ca not found: ID does not exist" Jan 26 17:14:00 crc kubenswrapper[4856]: I0126 17:14:00.839302 4856 scope.go:117] "RemoveContainer" containerID="9707c6c51dc5a30aeb3a0d380d791f52a183e2d158ce4607cc75a52d2079c292" Jan 26 17:14:00 crc kubenswrapper[4856]: E0126 17:14:00.839659 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9707c6c51dc5a30aeb3a0d380d791f52a183e2d158ce4607cc75a52d2079c292\": container with ID starting with 9707c6c51dc5a30aeb3a0d380d791f52a183e2d158ce4607cc75a52d2079c292 not found: ID does not exist" containerID="9707c6c51dc5a30aeb3a0d380d791f52a183e2d158ce4607cc75a52d2079c292" Jan 26 17:14:00 crc kubenswrapper[4856]: I0126 17:14:00.839692 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9707c6c51dc5a30aeb3a0d380d791f52a183e2d158ce4607cc75a52d2079c292"} err="failed to get container status \"9707c6c51dc5a30aeb3a0d380d791f52a183e2d158ce4607cc75a52d2079c292\": rpc error: code = NotFound desc = could not find container \"9707c6c51dc5a30aeb3a0d380d791f52a183e2d158ce4607cc75a52d2079c292\": container with ID starting with 9707c6c51dc5a30aeb3a0d380d791f52a183e2d158ce4607cc75a52d2079c292 not found: ID does not exist" Jan 26 17:14:00 crc kubenswrapper[4856]: I0126 17:14:00.839715 4856 scope.go:117] "RemoveContainer" containerID="e5ebb09819be3c66a7ddcceda562a758f2dcc2ef2bb1e57d8921f3dec667e919" Jan 26 17:14:00 crc kubenswrapper[4856]: E0126 
17:14:00.839989 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5ebb09819be3c66a7ddcceda562a758f2dcc2ef2bb1e57d8921f3dec667e919\": container with ID starting with e5ebb09819be3c66a7ddcceda562a758f2dcc2ef2bb1e57d8921f3dec667e919 not found: ID does not exist" containerID="e5ebb09819be3c66a7ddcceda562a758f2dcc2ef2bb1e57d8921f3dec667e919" Jan 26 17:14:00 crc kubenswrapper[4856]: I0126 17:14:00.840019 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5ebb09819be3c66a7ddcceda562a758f2dcc2ef2bb1e57d8921f3dec667e919"} err="failed to get container status \"e5ebb09819be3c66a7ddcceda562a758f2dcc2ef2bb1e57d8921f3dec667e919\": rpc error: code = NotFound desc = could not find container \"e5ebb09819be3c66a7ddcceda562a758f2dcc2ef2bb1e57d8921f3dec667e919\": container with ID starting with e5ebb09819be3c66a7ddcceda562a758f2dcc2ef2bb1e57d8921f3dec667e919 not found: ID does not exist" Jan 26 17:14:01 crc kubenswrapper[4856]: I0126 17:14:01.405043 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40ded4e9-2f52-405d-80fb-b4fef311cbc1" path="/var/lib/kubelet/pods/40ded4e9-2f52-405d-80fb-b4fef311cbc1/volumes" Jan 26 17:14:03 crc kubenswrapper[4856]: I0126 17:14:03.384806 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wfmp4"] Jan 26 17:14:03 crc kubenswrapper[4856]: E0126 17:14:03.385191 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40ded4e9-2f52-405d-80fb-b4fef311cbc1" containerName="extract-utilities" Jan 26 17:14:03 crc kubenswrapper[4856]: I0126 17:14:03.385213 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="40ded4e9-2f52-405d-80fb-b4fef311cbc1" containerName="extract-utilities" Jan 26 17:14:03 crc kubenswrapper[4856]: E0126 17:14:03.385227 4856 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="40ded4e9-2f52-405d-80fb-b4fef311cbc1" containerName="registry-server" Jan 26 17:14:03 crc kubenswrapper[4856]: I0126 17:14:03.385236 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="40ded4e9-2f52-405d-80fb-b4fef311cbc1" containerName="registry-server" Jan 26 17:14:03 crc kubenswrapper[4856]: E0126 17:14:03.385250 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40ded4e9-2f52-405d-80fb-b4fef311cbc1" containerName="extract-content" Jan 26 17:14:03 crc kubenswrapper[4856]: I0126 17:14:03.385256 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="40ded4e9-2f52-405d-80fb-b4fef311cbc1" containerName="extract-content" Jan 26 17:14:03 crc kubenswrapper[4856]: I0126 17:14:03.385400 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="40ded4e9-2f52-405d-80fb-b4fef311cbc1" containerName="registry-server" Jan 26 17:14:03 crc kubenswrapper[4856]: I0126 17:14:03.386401 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wfmp4" Jan 26 17:14:03 crc kubenswrapper[4856]: I0126 17:14:03.404764 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wfmp4"] Jan 26 17:14:03 crc kubenswrapper[4856]: I0126 17:14:03.486806 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce-catalog-content\") pod \"community-operators-wfmp4\" (UID: \"c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce\") " pod="openshift-marketplace/community-operators-wfmp4" Jan 26 17:14:03 crc kubenswrapper[4856]: I0126 17:14:03.486865 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce-utilities\") pod \"community-operators-wfmp4\" (UID: \"c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce\") 
" pod="openshift-marketplace/community-operators-wfmp4" Jan 26 17:14:03 crc kubenswrapper[4856]: I0126 17:14:03.486902 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq2j6\" (UniqueName: \"kubernetes.io/projected/c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce-kube-api-access-rq2j6\") pod \"community-operators-wfmp4\" (UID: \"c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce\") " pod="openshift-marketplace/community-operators-wfmp4" Jan 26 17:14:03 crc kubenswrapper[4856]: I0126 17:14:03.588134 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rq2j6\" (UniqueName: \"kubernetes.io/projected/c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce-kube-api-access-rq2j6\") pod \"community-operators-wfmp4\" (UID: \"c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce\") " pod="openshift-marketplace/community-operators-wfmp4" Jan 26 17:14:03 crc kubenswrapper[4856]: I0126 17:14:03.588671 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce-catalog-content\") pod \"community-operators-wfmp4\" (UID: \"c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce\") " pod="openshift-marketplace/community-operators-wfmp4" Jan 26 17:14:03 crc kubenswrapper[4856]: I0126 17:14:03.588724 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce-utilities\") pod \"community-operators-wfmp4\" (UID: \"c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce\") " pod="openshift-marketplace/community-operators-wfmp4" Jan 26 17:14:03 crc kubenswrapper[4856]: I0126 17:14:03.589225 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce-utilities\") pod \"community-operators-wfmp4\" (UID: \"c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce\") " 
pod="openshift-marketplace/community-operators-wfmp4" Jan 26 17:14:03 crc kubenswrapper[4856]: I0126 17:14:03.589346 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce-catalog-content\") pod \"community-operators-wfmp4\" (UID: \"c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce\") " pod="openshift-marketplace/community-operators-wfmp4" Jan 26 17:14:03 crc kubenswrapper[4856]: I0126 17:14:03.611670 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rq2j6\" (UniqueName: \"kubernetes.io/projected/c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce-kube-api-access-rq2j6\") pod \"community-operators-wfmp4\" (UID: \"c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce\") " pod="openshift-marketplace/community-operators-wfmp4" Jan 26 17:14:03 crc kubenswrapper[4856]: I0126 17:14:03.714335 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wfmp4" Jan 26 17:14:03 crc kubenswrapper[4856]: I0126 17:14:03.960125 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wfmp4"] Jan 26 17:14:04 crc kubenswrapper[4856]: I0126 17:14:04.793132 4856 generic.go:334] "Generic (PLEG): container finished" podID="c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce" containerID="f7fbcfd835618d9f317d705cee22aec914f20459f0e87d10fdb7999dbc362e73" exitCode=0 Jan 26 17:14:04 crc kubenswrapper[4856]: I0126 17:14:04.793200 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wfmp4" event={"ID":"c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce","Type":"ContainerDied","Data":"f7fbcfd835618d9f317d705cee22aec914f20459f0e87d10fdb7999dbc362e73"} Jan 26 17:14:04 crc kubenswrapper[4856]: I0126 17:14:04.793227 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wfmp4" 
event={"ID":"c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce","Type":"ContainerStarted","Data":"5fe590faf2a2e22175bca3437f2f8e5267371a1d72628c7cdbbefa6d6d6e3e3a"}
Jan 26 17:14:05 crc kubenswrapper[4856]: I0126 17:14:05.801683 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wfmp4" event={"ID":"c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce","Type":"ContainerStarted","Data":"4fbeea5f636f158b21305e61d7dfa24c3c339ebddc6eb377deb62b3a93364006"}
Jan 26 17:14:06 crc kubenswrapper[4856]: I0126 17:14:06.810939 4856 generic.go:334] "Generic (PLEG): container finished" podID="c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce" containerID="4fbeea5f636f158b21305e61d7dfa24c3c339ebddc6eb377deb62b3a93364006" exitCode=0
Jan 26 17:14:06 crc kubenswrapper[4856]: I0126 17:14:06.811069 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wfmp4" event={"ID":"c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce","Type":"ContainerDied","Data":"4fbeea5f636f158b21305e61d7dfa24c3c339ebddc6eb377deb62b3a93364006"}
Jan 26 17:14:08 crc kubenswrapper[4856]: I0126 17:14:08.847410 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wfmp4" event={"ID":"c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce","Type":"ContainerStarted","Data":"77a7df0f62462fb40d0e58d76dda48ebc4b0ee8b1269870dc7102ccd1a2e6a09"}
Jan 26 17:14:08 crc kubenswrapper[4856]: I0126 17:14:08.870955 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wfmp4" podStartSLOduration=3.191299689 podStartE2EDuration="5.870904191s" podCreationTimestamp="2026-01-26 17:14:03 +0000 UTC" firstStartedPulling="2026-01-26 17:14:04.795085118 +0000 UTC m=+940.748339099" lastFinishedPulling="2026-01-26 17:14:07.4746896 +0000 UTC m=+943.427943601" observedRunningTime="2026-01-26 17:14:08.870741596 +0000 UTC m=+944.823995587" watchObservedRunningTime="2026-01-26 17:14:08.870904191 +0000 UTC m=+944.824158172"
Jan 26 17:14:13 crc kubenswrapper[4856]: I0126 17:14:13.715107 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wfmp4"
Jan 26 17:14:13 crc kubenswrapper[4856]: I0126 17:14:13.715394 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wfmp4"
Jan 26 17:14:13 crc kubenswrapper[4856]: I0126 17:14:13.756821 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wfmp4"
Jan 26 17:14:13 crc kubenswrapper[4856]: I0126 17:14:13.994746 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wfmp4"
Jan 26 17:14:14 crc kubenswrapper[4856]: I0126 17:14:14.047379 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wfmp4"]
Jan 26 17:14:15 crc kubenswrapper[4856]: I0126 17:14:15.909108 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wfmp4" podUID="c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce" containerName="registry-server" containerID="cri-o://77a7df0f62462fb40d0e58d76dda48ebc4b0ee8b1269870dc7102ccd1a2e6a09" gracePeriod=2
Jan 26 17:14:19 crc kubenswrapper[4856]: I0126 17:14:19.870424 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wfmp4"
Jan 26 17:14:19 crc kubenswrapper[4856]: I0126 17:14:19.941887 4856 generic.go:334] "Generic (PLEG): container finished" podID="c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce" containerID="77a7df0f62462fb40d0e58d76dda48ebc4b0ee8b1269870dc7102ccd1a2e6a09" exitCode=0
Jan 26 17:14:19 crc kubenswrapper[4856]: I0126 17:14:19.941934 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wfmp4" event={"ID":"c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce","Type":"ContainerDied","Data":"77a7df0f62462fb40d0e58d76dda48ebc4b0ee8b1269870dc7102ccd1a2e6a09"}
Jan 26 17:14:19 crc kubenswrapper[4856]: I0126 17:14:19.941950 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wfmp4"
Jan 26 17:14:19 crc kubenswrapper[4856]: I0126 17:14:19.941976 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wfmp4" event={"ID":"c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce","Type":"ContainerDied","Data":"5fe590faf2a2e22175bca3437f2f8e5267371a1d72628c7cdbbefa6d6d6e3e3a"}
Jan 26 17:14:19 crc kubenswrapper[4856]: I0126 17:14:19.942030 4856 scope.go:117] "RemoveContainer" containerID="77a7df0f62462fb40d0e58d76dda48ebc4b0ee8b1269870dc7102ccd1a2e6a09"
Jan 26 17:14:19 crc kubenswrapper[4856]: I0126 17:14:19.954051 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rq2j6\" (UniqueName: \"kubernetes.io/projected/c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce-kube-api-access-rq2j6\") pod \"c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce\" (UID: \"c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce\") "
Jan 26 17:14:19 crc kubenswrapper[4856]: I0126 17:14:19.954299 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce-catalog-content\") pod \"c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce\" (UID: \"c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce\") "
Jan 26 17:14:19 crc kubenswrapper[4856]: I0126 17:14:19.954333 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce-utilities\") pod \"c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce\" (UID: \"c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce\") "
Jan 26 17:14:19 crc kubenswrapper[4856]: I0126 17:14:19.955256 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce-utilities" (OuterVolumeSpecName: "utilities") pod "c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce" (UID: "c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 17:14:19 crc kubenswrapper[4856]: I0126 17:14:19.960741 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce-kube-api-access-rq2j6" (OuterVolumeSpecName: "kube-api-access-rq2j6") pod "c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce" (UID: "c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce"). InnerVolumeSpecName "kube-api-access-rq2j6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 17:14:19 crc kubenswrapper[4856]: I0126 17:14:19.961853 4856 scope.go:117] "RemoveContainer" containerID="4fbeea5f636f158b21305e61d7dfa24c3c339ebddc6eb377deb62b3a93364006"
Jan 26 17:14:20 crc kubenswrapper[4856]: I0126 17:14:20.002335 4856 scope.go:117] "RemoveContainer" containerID="f7fbcfd835618d9f317d705cee22aec914f20459f0e87d10fdb7999dbc362e73"
Jan 26 17:14:20 crc kubenswrapper[4856]: I0126 17:14:20.015237 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce" (UID: "c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 17:14:20 crc kubenswrapper[4856]: I0126 17:14:20.024788 4856 scope.go:117] "RemoveContainer" containerID="77a7df0f62462fb40d0e58d76dda48ebc4b0ee8b1269870dc7102ccd1a2e6a09"
Jan 26 17:14:20 crc kubenswrapper[4856]: E0126 17:14:20.025961 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77a7df0f62462fb40d0e58d76dda48ebc4b0ee8b1269870dc7102ccd1a2e6a09\": container with ID starting with 77a7df0f62462fb40d0e58d76dda48ebc4b0ee8b1269870dc7102ccd1a2e6a09 not found: ID does not exist" containerID="77a7df0f62462fb40d0e58d76dda48ebc4b0ee8b1269870dc7102ccd1a2e6a09"
Jan 26 17:14:20 crc kubenswrapper[4856]: I0126 17:14:20.026016 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77a7df0f62462fb40d0e58d76dda48ebc4b0ee8b1269870dc7102ccd1a2e6a09"} err="failed to get container status \"77a7df0f62462fb40d0e58d76dda48ebc4b0ee8b1269870dc7102ccd1a2e6a09\": rpc error: code = NotFound desc = could not find container \"77a7df0f62462fb40d0e58d76dda48ebc4b0ee8b1269870dc7102ccd1a2e6a09\": container with ID starting with 77a7df0f62462fb40d0e58d76dda48ebc4b0ee8b1269870dc7102ccd1a2e6a09 not found: ID does not exist"
Jan 26 17:14:20 crc kubenswrapper[4856]: I0126 17:14:20.026045 4856 scope.go:117] "RemoveContainer" containerID="4fbeea5f636f158b21305e61d7dfa24c3c339ebddc6eb377deb62b3a93364006"
Jan 26 17:14:20 crc kubenswrapper[4856]: E0126 17:14:20.026273 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fbeea5f636f158b21305e61d7dfa24c3c339ebddc6eb377deb62b3a93364006\": container with ID starting with 4fbeea5f636f158b21305e61d7dfa24c3c339ebddc6eb377deb62b3a93364006 not found: ID does not exist" containerID="4fbeea5f636f158b21305e61d7dfa24c3c339ebddc6eb377deb62b3a93364006"
Jan 26 17:14:20 crc kubenswrapper[4856]: I0126 17:14:20.026299 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fbeea5f636f158b21305e61d7dfa24c3c339ebddc6eb377deb62b3a93364006"} err="failed to get container status \"4fbeea5f636f158b21305e61d7dfa24c3c339ebddc6eb377deb62b3a93364006\": rpc error: code = NotFound desc = could not find container \"4fbeea5f636f158b21305e61d7dfa24c3c339ebddc6eb377deb62b3a93364006\": container with ID starting with 4fbeea5f636f158b21305e61d7dfa24c3c339ebddc6eb377deb62b3a93364006 not found: ID does not exist"
Jan 26 17:14:20 crc kubenswrapper[4856]: I0126 17:14:20.026312 4856 scope.go:117] "RemoveContainer" containerID="f7fbcfd835618d9f317d705cee22aec914f20459f0e87d10fdb7999dbc362e73"
Jan 26 17:14:20 crc kubenswrapper[4856]: E0126 17:14:20.028018 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7fbcfd835618d9f317d705cee22aec914f20459f0e87d10fdb7999dbc362e73\": container with ID starting with f7fbcfd835618d9f317d705cee22aec914f20459f0e87d10fdb7999dbc362e73 not found: ID does not exist" containerID="f7fbcfd835618d9f317d705cee22aec914f20459f0e87d10fdb7999dbc362e73"
Jan 26 17:14:20 crc kubenswrapper[4856]: I0126 17:14:20.028048 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7fbcfd835618d9f317d705cee22aec914f20459f0e87d10fdb7999dbc362e73"} err="failed to get container status \"f7fbcfd835618d9f317d705cee22aec914f20459f0e87d10fdb7999dbc362e73\": rpc error: code = NotFound desc = could not find container \"f7fbcfd835618d9f317d705cee22aec914f20459f0e87d10fdb7999dbc362e73\": container with ID starting with f7fbcfd835618d9f317d705cee22aec914f20459f0e87d10fdb7999dbc362e73 not found: ID does not exist"
Jan 26 17:14:20 crc kubenswrapper[4856]: I0126 17:14:20.056635 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 17:14:20 crc kubenswrapper[4856]: I0126 17:14:20.056685 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 17:14:20 crc kubenswrapper[4856]: I0126 17:14:20.056695 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rq2j6\" (UniqueName: \"kubernetes.io/projected/c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce-kube-api-access-rq2j6\") on node \"crc\" DevicePath \"\""
Jan 26 17:14:20 crc kubenswrapper[4856]: I0126 17:14:20.275004 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wfmp4"]
Jan 26 17:14:20 crc kubenswrapper[4856]: I0126 17:14:20.288226 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wfmp4"]
Jan 26 17:14:21 crc kubenswrapper[4856]: I0126 17:14:21.403217 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce" path="/var/lib/kubelet/pods/c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce/volumes"
Jan 26 17:15:00 crc kubenswrapper[4856]: I0126 17:15:00.165510 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490795-qbctw"]
Jan 26 17:15:00 crc kubenswrapper[4856]: E0126 17:15:00.166723 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce" containerName="extract-content"
Jan 26 17:15:00 crc kubenswrapper[4856]: I0126 17:15:00.166761 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce" containerName="extract-content"
Jan 26 17:15:00 crc kubenswrapper[4856]: E0126 17:15:00.166778 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce" containerName="extract-utilities"
Jan 26 17:15:00 crc kubenswrapper[4856]: I0126 17:15:00.166790 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce" containerName="extract-utilities"
Jan 26 17:15:00 crc kubenswrapper[4856]: E0126 17:15:00.166837 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce" containerName="registry-server"
Jan 26 17:15:00 crc kubenswrapper[4856]: I0126 17:15:00.166857 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce" containerName="registry-server"
Jan 26 17:15:00 crc kubenswrapper[4856]: I0126 17:15:00.167120 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0fe9943-bcf5-4b7d-b093-5ee2e453b2ce" containerName="registry-server"
Jan 26 17:15:00 crc kubenswrapper[4856]: I0126 17:15:00.168051 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-qbctw"
Jan 26 17:15:00 crc kubenswrapper[4856]: I0126 17:15:00.170114 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 26 17:15:00 crc kubenswrapper[4856]: I0126 17:15:00.171669 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 26 17:15:00 crc kubenswrapper[4856]: I0126 17:15:00.177281 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490795-qbctw"]
Jan 26 17:15:00 crc kubenswrapper[4856]: I0126 17:15:00.352655 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3f61a40-5427-4fe8-89d2-92b71f9e1052-config-volume\") pod \"collect-profiles-29490795-qbctw\" (UID: \"b3f61a40-5427-4fe8-89d2-92b71f9e1052\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-qbctw"
Jan 26 17:15:00 crc kubenswrapper[4856]: I0126 17:15:00.352740 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b3f61a40-5427-4fe8-89d2-92b71f9e1052-secret-volume\") pod \"collect-profiles-29490795-qbctw\" (UID: \"b3f61a40-5427-4fe8-89d2-92b71f9e1052\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-qbctw"
Jan 26 17:15:00 crc kubenswrapper[4856]: I0126 17:15:00.352811 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxp5k\" (UniqueName: \"kubernetes.io/projected/b3f61a40-5427-4fe8-89d2-92b71f9e1052-kube-api-access-vxp5k\") pod \"collect-profiles-29490795-qbctw\" (UID: \"b3f61a40-5427-4fe8-89d2-92b71f9e1052\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-qbctw"
Jan 26 17:15:00 crc kubenswrapper[4856]: I0126 17:15:00.455337 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3f61a40-5427-4fe8-89d2-92b71f9e1052-config-volume\") pod \"collect-profiles-29490795-qbctw\" (UID: \"b3f61a40-5427-4fe8-89d2-92b71f9e1052\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-qbctw"
Jan 26 17:15:00 crc kubenswrapper[4856]: I0126 17:15:00.455386 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b3f61a40-5427-4fe8-89d2-92b71f9e1052-secret-volume\") pod \"collect-profiles-29490795-qbctw\" (UID: \"b3f61a40-5427-4fe8-89d2-92b71f9e1052\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-qbctw"
Jan 26 17:15:00 crc kubenswrapper[4856]: I0126 17:15:00.455422 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxp5k\" (UniqueName: \"kubernetes.io/projected/b3f61a40-5427-4fe8-89d2-92b71f9e1052-kube-api-access-vxp5k\") pod \"collect-profiles-29490795-qbctw\" (UID: \"b3f61a40-5427-4fe8-89d2-92b71f9e1052\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-qbctw"
Jan 26 17:15:00 crc kubenswrapper[4856]: I0126 17:15:00.456390 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3f61a40-5427-4fe8-89d2-92b71f9e1052-config-volume\") pod \"collect-profiles-29490795-qbctw\" (UID: \"b3f61a40-5427-4fe8-89d2-92b71f9e1052\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-qbctw"
Jan 26 17:15:00 crc kubenswrapper[4856]: I0126 17:15:00.461650 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b3f61a40-5427-4fe8-89d2-92b71f9e1052-secret-volume\") pod \"collect-profiles-29490795-qbctw\" (UID: \"b3f61a40-5427-4fe8-89d2-92b71f9e1052\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-qbctw"
Jan 26 17:15:00 crc kubenswrapper[4856]: I0126 17:15:00.471998 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxp5k\" (UniqueName: \"kubernetes.io/projected/b3f61a40-5427-4fe8-89d2-92b71f9e1052-kube-api-access-vxp5k\") pod \"collect-profiles-29490795-qbctw\" (UID: \"b3f61a40-5427-4fe8-89d2-92b71f9e1052\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-qbctw"
Jan 26 17:15:00 crc kubenswrapper[4856]: I0126 17:15:00.491849 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-qbctw"
Jan 26 17:15:00 crc kubenswrapper[4856]: I0126 17:15:00.905263 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490795-qbctw"]
Jan 26 17:15:01 crc kubenswrapper[4856]: I0126 17:15:01.265458 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-qbctw" event={"ID":"b3f61a40-5427-4fe8-89d2-92b71f9e1052","Type":"ContainerStarted","Data":"3f0f867caa951dd961753ccea7d1191209079f2d8ed4c2ee8073b09d12c6f2aa"}
Jan 26 17:15:02 crc kubenswrapper[4856]: I0126 17:15:02.274721 4856 generic.go:334] "Generic (PLEG): container finished" podID="b3f61a40-5427-4fe8-89d2-92b71f9e1052" containerID="01b95cef7e01f8893038633345035674376d741feceba3066786508454c91270" exitCode=0
Jan 26 17:15:02 crc kubenswrapper[4856]: I0126 17:15:02.275048 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-qbctw" event={"ID":"b3f61a40-5427-4fe8-89d2-92b71f9e1052","Type":"ContainerDied","Data":"01b95cef7e01f8893038633345035674376d741feceba3066786508454c91270"}
Jan 26 17:15:03 crc kubenswrapper[4856]: I0126 17:15:03.597620 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-qbctw"
Jan 26 17:15:03 crc kubenswrapper[4856]: I0126 17:15:03.797593 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3f61a40-5427-4fe8-89d2-92b71f9e1052-config-volume\") pod \"b3f61a40-5427-4fe8-89d2-92b71f9e1052\" (UID: \"b3f61a40-5427-4fe8-89d2-92b71f9e1052\") "
Jan 26 17:15:03 crc kubenswrapper[4856]: I0126 17:15:03.797643 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxp5k\" (UniqueName: \"kubernetes.io/projected/b3f61a40-5427-4fe8-89d2-92b71f9e1052-kube-api-access-vxp5k\") pod \"b3f61a40-5427-4fe8-89d2-92b71f9e1052\" (UID: \"b3f61a40-5427-4fe8-89d2-92b71f9e1052\") "
Jan 26 17:15:03 crc kubenswrapper[4856]: I0126 17:15:03.797774 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b3f61a40-5427-4fe8-89d2-92b71f9e1052-secret-volume\") pod \"b3f61a40-5427-4fe8-89d2-92b71f9e1052\" (UID: \"b3f61a40-5427-4fe8-89d2-92b71f9e1052\") "
Jan 26 17:15:03 crc kubenswrapper[4856]: I0126 17:15:03.798569 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3f61a40-5427-4fe8-89d2-92b71f9e1052-config-volume" (OuterVolumeSpecName: "config-volume") pod "b3f61a40-5427-4fe8-89d2-92b71f9e1052" (UID: "b3f61a40-5427-4fe8-89d2-92b71f9e1052"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 17:15:03 crc kubenswrapper[4856]: I0126 17:15:03.803482 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3f61a40-5427-4fe8-89d2-92b71f9e1052-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b3f61a40-5427-4fe8-89d2-92b71f9e1052" (UID: "b3f61a40-5427-4fe8-89d2-92b71f9e1052"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 17:15:03 crc kubenswrapper[4856]: I0126 17:15:03.804163 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3f61a40-5427-4fe8-89d2-92b71f9e1052-kube-api-access-vxp5k" (OuterVolumeSpecName: "kube-api-access-vxp5k") pod "b3f61a40-5427-4fe8-89d2-92b71f9e1052" (UID: "b3f61a40-5427-4fe8-89d2-92b71f9e1052"). InnerVolumeSpecName "kube-api-access-vxp5k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 17:15:03 crc kubenswrapper[4856]: I0126 17:15:03.899026 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxp5k\" (UniqueName: \"kubernetes.io/projected/b3f61a40-5427-4fe8-89d2-92b71f9e1052-kube-api-access-vxp5k\") on node \"crc\" DevicePath \"\""
Jan 26 17:15:03 crc kubenswrapper[4856]: I0126 17:15:03.899063 4856 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3f61a40-5427-4fe8-89d2-92b71f9e1052-config-volume\") on node \"crc\" DevicePath \"\""
Jan 26 17:15:03 crc kubenswrapper[4856]: I0126 17:15:03.899075 4856 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b3f61a40-5427-4fe8-89d2-92b71f9e1052-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 26 17:15:04 crc kubenswrapper[4856]: I0126 17:15:04.291669 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-qbctw" event={"ID":"b3f61a40-5427-4fe8-89d2-92b71f9e1052","Type":"ContainerDied","Data":"3f0f867caa951dd961753ccea7d1191209079f2d8ed4c2ee8073b09d12c6f2aa"}
Jan 26 17:15:04 crc kubenswrapper[4856]: I0126 17:15:04.291725 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f0f867caa951dd961753ccea7d1191209079f2d8ed4c2ee8073b09d12c6f2aa"
Jan 26 17:15:04 crc kubenswrapper[4856]: I0126 17:15:04.292231 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-qbctw"
Jan 26 17:15:08 crc kubenswrapper[4856]: I0126 17:15:08.318004 4856 generic.go:334] "Generic (PLEG): container finished" podID="e3f6dcf4-c152-4a81-8e1d-1fdf469be581" containerID="acf8873d5a9fd2dc945aa7f942f92399d79aa34d23d46be85cf69d51f18751c1" exitCode=0
Jan 26 17:15:08 crc kubenswrapper[4856]: I0126 17:15:08.318065 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"e3f6dcf4-c152-4a81-8e1d-1fdf469be581","Type":"ContainerDied","Data":"acf8873d5a9fd2dc945aa7f942f92399d79aa34d23d46be85cf69d51f18751c1"}
Jan 26 17:15:09 crc kubenswrapper[4856]: I0126 17:15:09.618782 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build"
Jan 26 17:15:09 crc kubenswrapper[4856]: I0126 17:15:09.678689 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-buildcachedir\") pod \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\" (UID: \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\") "
Jan 26 17:15:09 crc kubenswrapper[4856]: I0126 17:15:09.678830 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-build-proxy-ca-bundles\") pod \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\" (UID: \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\") "
Jan 26 17:15:09 crc kubenswrapper[4856]: I0126 17:15:09.678840 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "e3f6dcf4-c152-4a81-8e1d-1fdf469be581" (UID: "e3f6dcf4-c152-4a81-8e1d-1fdf469be581"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 17:15:09 crc kubenswrapper[4856]: I0126 17:15:09.678865 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-build-system-configs\") pod \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\" (UID: \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\") "
Jan 26 17:15:09 crc kubenswrapper[4856]: I0126 17:15:09.678940 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-build-blob-cache\") pod \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\" (UID: \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\") "
Jan 26 17:15:09 crc kubenswrapper[4856]: I0126 17:15:09.679003 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-buildworkdir\") pod \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\" (UID: \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\") "
Jan 26 17:15:09 crc kubenswrapper[4856]: I0126 17:15:09.679025 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spxgc\" (UniqueName: \"kubernetes.io/projected/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-kube-api-access-spxgc\") pod \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\" (UID: \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\") "
Jan 26 17:15:09 crc kubenswrapper[4856]: I0126 17:15:09.679048 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-container-storage-root\") pod \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\" (UID: \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\") "
Jan 26 17:15:09 crc kubenswrapper[4856]: I0126 17:15:09.679075 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-build-ca-bundles\") pod \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\" (UID: \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\") "
Jan 26 17:15:09 crc kubenswrapper[4856]: I0126 17:15:09.679134 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-container-storage-run\") pod \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\" (UID: \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\") "
Jan 26 17:15:09 crc kubenswrapper[4856]: I0126 17:15:09.679154 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-builder-dockercfg-8h4xs-pull\") pod \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\" (UID: \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\") "
Jan 26 17:15:09 crc kubenswrapper[4856]: I0126 17:15:09.679261 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-builder-dockercfg-8h4xs-push\") pod \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\" (UID: \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\") "
Jan 26 17:15:09 crc kubenswrapper[4856]: I0126 17:15:09.679287 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-node-pullsecrets\") pod \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\" (UID: \"e3f6dcf4-c152-4a81-8e1d-1fdf469be581\") "
Jan 26 17:15:09 crc kubenswrapper[4856]: I0126 17:15:09.679751 4856 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-buildcachedir\") on node \"crc\" DevicePath \"\""
Jan 26 17:15:09 crc kubenswrapper[4856]: I0126 17:15:09.679796 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "e3f6dcf4-c152-4a81-8e1d-1fdf469be581" (UID: "e3f6dcf4-c152-4a81-8e1d-1fdf469be581"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 17:15:09 crc kubenswrapper[4856]: I0126 17:15:09.680494 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "e3f6dcf4-c152-4a81-8e1d-1fdf469be581" (UID: "e3f6dcf4-c152-4a81-8e1d-1fdf469be581"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 17:15:09 crc kubenswrapper[4856]: I0126 17:15:09.681142 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "e3f6dcf4-c152-4a81-8e1d-1fdf469be581" (UID: "e3f6dcf4-c152-4a81-8e1d-1fdf469be581"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 17:15:09 crc kubenswrapper[4856]: I0126 17:15:09.681889 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "e3f6dcf4-c152-4a81-8e1d-1fdf469be581" (UID: "e3f6dcf4-c152-4a81-8e1d-1fdf469be581"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 17:15:09 crc kubenswrapper[4856]: I0126 17:15:09.680463 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "e3f6dcf4-c152-4a81-8e1d-1fdf469be581" (UID: "e3f6dcf4-c152-4a81-8e1d-1fdf469be581"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 17:15:09 crc kubenswrapper[4856]: I0126 17:15:09.686261 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-builder-dockercfg-8h4xs-pull" (OuterVolumeSpecName: "builder-dockercfg-8h4xs-pull") pod "e3f6dcf4-c152-4a81-8e1d-1fdf469be581" (UID: "e3f6dcf4-c152-4a81-8e1d-1fdf469be581"). InnerVolumeSpecName "builder-dockercfg-8h4xs-pull". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 17:15:09 crc kubenswrapper[4856]: I0126 17:15:09.686382 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-kube-api-access-spxgc" (OuterVolumeSpecName: "kube-api-access-spxgc") pod "e3f6dcf4-c152-4a81-8e1d-1fdf469be581" (UID: "e3f6dcf4-c152-4a81-8e1d-1fdf469be581"). InnerVolumeSpecName "kube-api-access-spxgc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 17:15:09 crc kubenswrapper[4856]: I0126 17:15:09.688656 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-builder-dockercfg-8h4xs-push" (OuterVolumeSpecName: "builder-dockercfg-8h4xs-push") pod "e3f6dcf4-c152-4a81-8e1d-1fdf469be581" (UID: "e3f6dcf4-c152-4a81-8e1d-1fdf469be581"). InnerVolumeSpecName "builder-dockercfg-8h4xs-push". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 17:15:09 crc kubenswrapper[4856]: I0126 17:15:09.727309 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "e3f6dcf4-c152-4a81-8e1d-1fdf469be581" (UID: "e3f6dcf4-c152-4a81-8e1d-1fdf469be581"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 17:15:09 crc kubenswrapper[4856]: I0126 17:15:09.781007 4856 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-container-storage-run\") on node \"crc\" DevicePath \"\""
Jan 26 17:15:09 crc kubenswrapper[4856]: I0126 17:15:09.781043 4856 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-builder-dockercfg-8h4xs-pull\") on node \"crc\" DevicePath \"\""
Jan 26 17:15:09 crc kubenswrapper[4856]: I0126 17:15:09.781055 4856 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-builder-dockercfg-8h4xs-push\") on node \"crc\" DevicePath \"\""
Jan 26 17:15:09 crc kubenswrapper[4856]: I0126 17:15:09.781067 4856 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Jan 26 17:15:09 crc kubenswrapper[4856]: I0126 17:15:09.781079 4856 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 26 17:15:09 crc kubenswrapper[4856]: I0126 17:15:09.781089 4856 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-build-system-configs\") on node \"crc\" DevicePath \"\""
Jan 26 17:15:09 crc kubenswrapper[4856]: I0126 17:15:09.781102 4856 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-buildworkdir\") on node \"crc\" DevicePath \"\""
Jan 26 17:15:09 crc kubenswrapper[4856]: I0126 17:15:09.781113 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spxgc\" (UniqueName: \"kubernetes.io/projected/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-kube-api-access-spxgc\") on node \"crc\" DevicePath \"\""
Jan 26 17:15:09 crc kubenswrapper[4856]: I0126 17:15:09.781124 4856 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 26 17:15:09 crc kubenswrapper[4856]: I0126 17:15:09.856618 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "e3f6dcf4-c152-4a81-8e1d-1fdf469be581" (UID: "e3f6dcf4-c152-4a81-8e1d-1fdf469be581"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 17:15:09 crc kubenswrapper[4856]: I0126 17:15:09.881928 4856 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-build-blob-cache\") on node \"crc\" DevicePath \"\""
Jan 26 17:15:10 crc kubenswrapper[4856]: I0126 17:15:10.341799 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"e3f6dcf4-c152-4a81-8e1d-1fdf469be581","Type":"ContainerDied","Data":"f38fe849edfe940888da0c7e9589bf8433e33392b1573a13a0d673b63831ce2b"}
Jan 26 17:15:10 crc kubenswrapper[4856]: I0126 17:15:10.341873 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f38fe849edfe940888da0c7e9589bf8433e33392b1573a13a0d673b63831ce2b"
Jan 26 17:15:10 crc kubenswrapper[4856]: I0126 17:15:10.341893 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build"
Jan 26 17:15:11 crc kubenswrapper[4856]: I0126 17:15:11.666790 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "e3f6dcf4-c152-4a81-8e1d-1fdf469be581" (UID: "e3f6dcf4-c152-4a81-8e1d-1fdf469be581"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:15:11 crc kubenswrapper[4856]: I0126 17:15:11.688437 4856 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e3f6dcf4-c152-4a81-8e1d-1fdf469be581-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 26 17:15:14 crc kubenswrapper[4856]: I0126 17:15:14.820728 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 26 17:15:14 crc kubenswrapper[4856]: E0126 17:15:14.821387 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3f6dcf4-c152-4a81-8e1d-1fdf469be581" containerName="git-clone" Jan 26 17:15:14 crc kubenswrapper[4856]: I0126 17:15:14.821410 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3f6dcf4-c152-4a81-8e1d-1fdf469be581" containerName="git-clone" Jan 26 17:15:14 crc kubenswrapper[4856]: E0126 17:15:14.821423 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3f61a40-5427-4fe8-89d2-92b71f9e1052" containerName="collect-profiles" Jan 26 17:15:14 crc kubenswrapper[4856]: I0126 17:15:14.821429 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3f61a40-5427-4fe8-89d2-92b71f9e1052" containerName="collect-profiles" Jan 26 17:15:14 crc kubenswrapper[4856]: E0126 17:15:14.821441 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3f6dcf4-c152-4a81-8e1d-1fdf469be581" containerName="manage-dockerfile" Jan 26 17:15:14 crc kubenswrapper[4856]: I0126 17:15:14.821447 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3f6dcf4-c152-4a81-8e1d-1fdf469be581" containerName="manage-dockerfile" Jan 26 17:15:14 crc kubenswrapper[4856]: E0126 17:15:14.821456 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3f6dcf4-c152-4a81-8e1d-1fdf469be581" containerName="docker-build" Jan 26 17:15:14 crc kubenswrapper[4856]: I0126 17:15:14.821462 4856 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="e3f6dcf4-c152-4a81-8e1d-1fdf469be581" containerName="docker-build" Jan 26 17:15:14 crc kubenswrapper[4856]: I0126 17:15:14.821626 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3f6dcf4-c152-4a81-8e1d-1fdf469be581" containerName="docker-build" Jan 26 17:15:14 crc kubenswrapper[4856]: I0126 17:15:14.821639 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3f61a40-5427-4fe8-89d2-92b71f9e1052" containerName="collect-profiles" Jan 26 17:15:14 crc kubenswrapper[4856]: I0126 17:15:14.822728 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Jan 26 17:15:14 crc kubenswrapper[4856]: I0126 17:15:14.825087 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-sys-config" Jan 26 17:15:14 crc kubenswrapper[4856]: I0126 17:15:14.825811 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-global-ca" Jan 26 17:15:14 crc kubenswrapper[4856]: I0126 17:15:14.826054 4856 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-8h4xs" Jan 26 17:15:14 crc kubenswrapper[4856]: I0126 17:15:14.826115 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-ca" Jan 26 17:15:14 crc kubenswrapper[4856]: I0126 17:15:14.837418 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 26 17:15:14 crc kubenswrapper[4856]: I0126 17:15:14.955350 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ed22249-c992-4d01-a0ec-110a1ff4f786-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"5ed22249-c992-4d01-a0ec-110a1ff4f786\") " 
pod="service-telemetry/smart-gateway-operator-1-build" Jan 26 17:15:14 crc kubenswrapper[4856]: I0126 17:15:14.955421 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/5ed22249-c992-4d01-a0ec-110a1ff4f786-builder-dockercfg-8h4xs-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"5ed22249-c992-4d01-a0ec-110a1ff4f786\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 26 17:15:14 crc kubenswrapper[4856]: I0126 17:15:14.955502 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5ed22249-c992-4d01-a0ec-110a1ff4f786-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"5ed22249-c992-4d01-a0ec-110a1ff4f786\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 26 17:15:14 crc kubenswrapper[4856]: I0126 17:15:14.955541 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5ed22249-c992-4d01-a0ec-110a1ff4f786-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"5ed22249-c992-4d01-a0ec-110a1ff4f786\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 26 17:15:14 crc kubenswrapper[4856]: I0126 17:15:14.955559 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5ed22249-c992-4d01-a0ec-110a1ff4f786-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"5ed22249-c992-4d01-a0ec-110a1ff4f786\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 26 17:15:14 crc kubenswrapper[4856]: I0126 17:15:14.955574 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/5ed22249-c992-4d01-a0ec-110a1ff4f786-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"5ed22249-c992-4d01-a0ec-110a1ff4f786\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 26 17:15:14 crc kubenswrapper[4856]: I0126 17:15:14.955594 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7q4r\" (UniqueName: \"kubernetes.io/projected/5ed22249-c992-4d01-a0ec-110a1ff4f786-kube-api-access-d7q4r\") pod \"smart-gateway-operator-1-build\" (UID: \"5ed22249-c992-4d01-a0ec-110a1ff4f786\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 26 17:15:14 crc kubenswrapper[4856]: I0126 17:15:14.955626 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5ed22249-c992-4d01-a0ec-110a1ff4f786-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"5ed22249-c992-4d01-a0ec-110a1ff4f786\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 26 17:15:14 crc kubenswrapper[4856]: I0126 17:15:14.955853 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/5ed22249-c992-4d01-a0ec-110a1ff4f786-builder-dockercfg-8h4xs-push\") pod \"smart-gateway-operator-1-build\" (UID: \"5ed22249-c992-4d01-a0ec-110a1ff4f786\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 26 17:15:14 crc kubenswrapper[4856]: I0126 17:15:14.955972 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ed22249-c992-4d01-a0ec-110a1ff4f786-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"5ed22249-c992-4d01-a0ec-110a1ff4f786\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 26 17:15:14 crc 
kubenswrapper[4856]: I0126 17:15:14.956182 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5ed22249-c992-4d01-a0ec-110a1ff4f786-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"5ed22249-c992-4d01-a0ec-110a1ff4f786\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 26 17:15:14 crc kubenswrapper[4856]: I0126 17:15:14.956226 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5ed22249-c992-4d01-a0ec-110a1ff4f786-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"5ed22249-c992-4d01-a0ec-110a1ff4f786\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 26 17:15:15 crc kubenswrapper[4856]: I0126 17:15:15.057396 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5ed22249-c992-4d01-a0ec-110a1ff4f786-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"5ed22249-c992-4d01-a0ec-110a1ff4f786\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 26 17:15:15 crc kubenswrapper[4856]: I0126 17:15:15.057469 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5ed22249-c992-4d01-a0ec-110a1ff4f786-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"5ed22249-c992-4d01-a0ec-110a1ff4f786\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 26 17:15:15 crc kubenswrapper[4856]: I0126 17:15:15.057503 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5ed22249-c992-4d01-a0ec-110a1ff4f786-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"5ed22249-c992-4d01-a0ec-110a1ff4f786\") " 
pod="service-telemetry/smart-gateway-operator-1-build" Jan 26 17:15:15 crc kubenswrapper[4856]: I0126 17:15:15.057568 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7q4r\" (UniqueName: \"kubernetes.io/projected/5ed22249-c992-4d01-a0ec-110a1ff4f786-kube-api-access-d7q4r\") pod \"smart-gateway-operator-1-build\" (UID: \"5ed22249-c992-4d01-a0ec-110a1ff4f786\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 26 17:15:15 crc kubenswrapper[4856]: I0126 17:15:15.057584 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5ed22249-c992-4d01-a0ec-110a1ff4f786-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"5ed22249-c992-4d01-a0ec-110a1ff4f786\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 26 17:15:15 crc kubenswrapper[4856]: I0126 17:15:15.057614 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5ed22249-c992-4d01-a0ec-110a1ff4f786-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"5ed22249-c992-4d01-a0ec-110a1ff4f786\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 26 17:15:15 crc kubenswrapper[4856]: I0126 17:15:15.057730 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/5ed22249-c992-4d01-a0ec-110a1ff4f786-builder-dockercfg-8h4xs-push\") pod \"smart-gateway-operator-1-build\" (UID: \"5ed22249-c992-4d01-a0ec-110a1ff4f786\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 26 17:15:15 crc kubenswrapper[4856]: I0126 17:15:15.057851 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5ed22249-c992-4d01-a0ec-110a1ff4f786-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: 
\"5ed22249-c992-4d01-a0ec-110a1ff4f786\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 26 17:15:15 crc kubenswrapper[4856]: I0126 17:15:15.058315 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5ed22249-c992-4d01-a0ec-110a1ff4f786-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"5ed22249-c992-4d01-a0ec-110a1ff4f786\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 26 17:15:15 crc kubenswrapper[4856]: I0126 17:15:15.058357 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5ed22249-c992-4d01-a0ec-110a1ff4f786-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"5ed22249-c992-4d01-a0ec-110a1ff4f786\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 26 17:15:15 crc kubenswrapper[4856]: I0126 17:15:15.058923 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ed22249-c992-4d01-a0ec-110a1ff4f786-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"5ed22249-c992-4d01-a0ec-110a1ff4f786\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 26 17:15:15 crc kubenswrapper[4856]: I0126 17:15:15.059031 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5ed22249-c992-4d01-a0ec-110a1ff4f786-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"5ed22249-c992-4d01-a0ec-110a1ff4f786\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 26 17:15:15 crc kubenswrapper[4856]: I0126 17:15:15.059076 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5ed22249-c992-4d01-a0ec-110a1ff4f786-container-storage-run\") pod 
\"smart-gateway-operator-1-build\" (UID: \"5ed22249-c992-4d01-a0ec-110a1ff4f786\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 26 17:15:15 crc kubenswrapper[4856]: I0126 17:15:15.059133 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ed22249-c992-4d01-a0ec-110a1ff4f786-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"5ed22249-c992-4d01-a0ec-110a1ff4f786\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 26 17:15:15 crc kubenswrapper[4856]: I0126 17:15:15.059176 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/5ed22249-c992-4d01-a0ec-110a1ff4f786-builder-dockercfg-8h4xs-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"5ed22249-c992-4d01-a0ec-110a1ff4f786\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 26 17:15:15 crc kubenswrapper[4856]: I0126 17:15:15.059214 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5ed22249-c992-4d01-a0ec-110a1ff4f786-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"5ed22249-c992-4d01-a0ec-110a1ff4f786\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 26 17:15:15 crc kubenswrapper[4856]: I0126 17:15:15.059437 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5ed22249-c992-4d01-a0ec-110a1ff4f786-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"5ed22249-c992-4d01-a0ec-110a1ff4f786\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 26 17:15:15 crc kubenswrapper[4856]: I0126 17:15:15.059788 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/5ed22249-c992-4d01-a0ec-110a1ff4f786-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"5ed22249-c992-4d01-a0ec-110a1ff4f786\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 26 17:15:15 crc kubenswrapper[4856]: I0126 17:15:15.060441 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5ed22249-c992-4d01-a0ec-110a1ff4f786-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"5ed22249-c992-4d01-a0ec-110a1ff4f786\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 26 17:15:15 crc kubenswrapper[4856]: I0126 17:15:15.060827 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ed22249-c992-4d01-a0ec-110a1ff4f786-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"5ed22249-c992-4d01-a0ec-110a1ff4f786\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 26 17:15:15 crc kubenswrapper[4856]: I0126 17:15:15.063898 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/5ed22249-c992-4d01-a0ec-110a1ff4f786-builder-dockercfg-8h4xs-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"5ed22249-c992-4d01-a0ec-110a1ff4f786\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 26 17:15:15 crc kubenswrapper[4856]: I0126 17:15:15.064367 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ed22249-c992-4d01-a0ec-110a1ff4f786-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"5ed22249-c992-4d01-a0ec-110a1ff4f786\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 26 17:15:15 crc kubenswrapper[4856]: I0126 17:15:15.065032 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/5ed22249-c992-4d01-a0ec-110a1ff4f786-builder-dockercfg-8h4xs-push\") pod \"smart-gateway-operator-1-build\" (UID: \"5ed22249-c992-4d01-a0ec-110a1ff4f786\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 26 17:15:15 crc kubenswrapper[4856]: I0126 17:15:15.081064 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7q4r\" (UniqueName: \"kubernetes.io/projected/5ed22249-c992-4d01-a0ec-110a1ff4f786-kube-api-access-d7q4r\") pod \"smart-gateway-operator-1-build\" (UID: \"5ed22249-c992-4d01-a0ec-110a1ff4f786\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 26 17:15:15 crc kubenswrapper[4856]: I0126 17:15:15.143398 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Jan 26 17:15:15 crc kubenswrapper[4856]: I0126 17:15:15.582335 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 26 17:15:16 crc kubenswrapper[4856]: I0126 17:15:16.416792 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"5ed22249-c992-4d01-a0ec-110a1ff4f786","Type":"ContainerStarted","Data":"bbaa0e65ef3404876abbdc6bdb86225f8808634f7616aeafb5182074b30443dd"} Jan 26 17:15:16 crc kubenswrapper[4856]: I0126 17:15:16.417128 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"5ed22249-c992-4d01-a0ec-110a1ff4f786","Type":"ContainerStarted","Data":"32c29441f42e73b7bc00f264639641dd4797b6d25e6fdbaa53648b8d228dc4d6"} Jan 26 17:15:17 crc kubenswrapper[4856]: I0126 17:15:17.426059 4856 generic.go:334] "Generic (PLEG): container finished" podID="5ed22249-c992-4d01-a0ec-110a1ff4f786" containerID="bbaa0e65ef3404876abbdc6bdb86225f8808634f7616aeafb5182074b30443dd" exitCode=0 Jan 26 17:15:17 crc 
kubenswrapper[4856]: I0126 17:15:17.426167 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"5ed22249-c992-4d01-a0ec-110a1ff4f786","Type":"ContainerDied","Data":"bbaa0e65ef3404876abbdc6bdb86225f8808634f7616aeafb5182074b30443dd"} Jan 26 17:15:18 crc kubenswrapper[4856]: I0126 17:15:18.438863 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"5ed22249-c992-4d01-a0ec-110a1ff4f786","Type":"ContainerStarted","Data":"fefa4e9ff30a3471c605b939c41d47c1a8dfc5a17b0199c647a6644529b4b943"} Jan 26 17:15:18 crc kubenswrapper[4856]: I0126 17:15:18.462330 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-1-build" podStartSLOduration=4.462293644 podStartE2EDuration="4.462293644s" podCreationTimestamp="2026-01-26 17:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:15:18.460606986 +0000 UTC m=+1014.413861007" watchObservedRunningTime="2026-01-26 17:15:18.462293644 +0000 UTC m=+1014.415547675" Jan 26 17:15:25 crc kubenswrapper[4856]: I0126 17:15:25.384923 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 26 17:15:25 crc kubenswrapper[4856]: I0126 17:15:25.385688 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/smart-gateway-operator-1-build" podUID="5ed22249-c992-4d01-a0ec-110a1ff4f786" containerName="docker-build" containerID="cri-o://fefa4e9ff30a3471c605b939c41d47c1a8dfc5a17b0199c647a6644529b4b943" gracePeriod=30 Jan 26 17:15:26 crc kubenswrapper[4856]: I0126 17:15:26.938603 4856 patch_prober.go:28] interesting pod/machine-config-daemon-xm9cq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:15:26 crc kubenswrapper[4856]: I0126 17:15:26.939102 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:15:27 crc kubenswrapper[4856]: I0126 17:15:27.111419 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Jan 26 17:15:27 crc kubenswrapper[4856]: I0126 17:15:27.113027 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Jan 26 17:15:27 crc kubenswrapper[4856]: I0126 17:15:27.115014 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-sys-config" Jan 26 17:15:27 crc kubenswrapper[4856]: I0126 17:15:27.115087 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-global-ca" Jan 26 17:15:27 crc kubenswrapper[4856]: I0126 17:15:27.115706 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-ca" Jan 26 17:15:27 crc kubenswrapper[4856]: I0126 17:15:27.133243 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Jan 26 17:15:27 crc kubenswrapper[4856]: I0126 17:15:27.292081 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a2268784-2c29-45fa-8bbc-4426f4c566b6-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"a2268784-2c29-45fa-8bbc-4426f4c566b6\") " 
pod="service-telemetry/smart-gateway-operator-2-build" Jan 26 17:15:27 crc kubenswrapper[4856]: I0126 17:15:27.292141 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a2268784-2c29-45fa-8bbc-4426f4c566b6-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"a2268784-2c29-45fa-8bbc-4426f4c566b6\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 26 17:15:27 crc kubenswrapper[4856]: I0126 17:15:27.292172 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a2268784-2c29-45fa-8bbc-4426f4c566b6-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"a2268784-2c29-45fa-8bbc-4426f4c566b6\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 26 17:15:27 crc kubenswrapper[4856]: I0126 17:15:27.292227 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a2268784-2c29-45fa-8bbc-4426f4c566b6-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"a2268784-2c29-45fa-8bbc-4426f4c566b6\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 26 17:15:27 crc kubenswrapper[4856]: I0126 17:15:27.292850 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a2268784-2c29-45fa-8bbc-4426f4c566b6-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"a2268784-2c29-45fa-8bbc-4426f4c566b6\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 26 17:15:27 crc kubenswrapper[4856]: I0126 17:15:27.293010 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/a2268784-2c29-45fa-8bbc-4426f4c566b6-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"a2268784-2c29-45fa-8bbc-4426f4c566b6\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 26 17:15:27 crc kubenswrapper[4856]: I0126 17:15:27.293118 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/a2268784-2c29-45fa-8bbc-4426f4c566b6-builder-dockercfg-8h4xs-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"a2268784-2c29-45fa-8bbc-4426f4c566b6\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 26 17:15:27 crc kubenswrapper[4856]: I0126 17:15:27.293170 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a2268784-2c29-45fa-8bbc-4426f4c566b6-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"a2268784-2c29-45fa-8bbc-4426f4c566b6\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 26 17:15:27 crc kubenswrapper[4856]: I0126 17:15:27.293237 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hs6s\" (UniqueName: \"kubernetes.io/projected/a2268784-2c29-45fa-8bbc-4426f4c566b6-kube-api-access-7hs6s\") pod \"smart-gateway-operator-2-build\" (UID: \"a2268784-2c29-45fa-8bbc-4426f4c566b6\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 26 17:15:27 crc kubenswrapper[4856]: I0126 17:15:27.293280 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a2268784-2c29-45fa-8bbc-4426f4c566b6-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"a2268784-2c29-45fa-8bbc-4426f4c566b6\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 26 17:15:27 crc kubenswrapper[4856]: I0126 
17:15:27.293363 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/a2268784-2c29-45fa-8bbc-4426f4c566b6-builder-dockercfg-8h4xs-push\") pod \"smart-gateway-operator-2-build\" (UID: \"a2268784-2c29-45fa-8bbc-4426f4c566b6\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 26 17:15:27 crc kubenswrapper[4856]: I0126 17:15:27.293443 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a2268784-2c29-45fa-8bbc-4426f4c566b6-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"a2268784-2c29-45fa-8bbc-4426f4c566b6\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 26 17:15:27 crc kubenswrapper[4856]: I0126 17:15:27.394951 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a2268784-2c29-45fa-8bbc-4426f4c566b6-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"a2268784-2c29-45fa-8bbc-4426f4c566b6\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 26 17:15:27 crc kubenswrapper[4856]: I0126 17:15:27.395062 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a2268784-2c29-45fa-8bbc-4426f4c566b6-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"a2268784-2c29-45fa-8bbc-4426f4c566b6\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 26 17:15:27 crc kubenswrapper[4856]: I0126 17:15:27.395127 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/a2268784-2c29-45fa-8bbc-4426f4c566b6-builder-dockercfg-8h4xs-pull\") pod \"smart-gateway-operator-2-build\" (UID: 
\"a2268784-2c29-45fa-8bbc-4426f4c566b6\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 26 17:15:27 crc kubenswrapper[4856]: I0126 17:15:27.395174 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a2268784-2c29-45fa-8bbc-4426f4c566b6-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"a2268784-2c29-45fa-8bbc-4426f4c566b6\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 26 17:15:27 crc kubenswrapper[4856]: I0126 17:15:27.395229 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hs6s\" (UniqueName: \"kubernetes.io/projected/a2268784-2c29-45fa-8bbc-4426f4c566b6-kube-api-access-7hs6s\") pod \"smart-gateway-operator-2-build\" (UID: \"a2268784-2c29-45fa-8bbc-4426f4c566b6\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 26 17:15:27 crc kubenswrapper[4856]: I0126 17:15:27.395269 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a2268784-2c29-45fa-8bbc-4426f4c566b6-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"a2268784-2c29-45fa-8bbc-4426f4c566b6\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 26 17:15:27 crc kubenswrapper[4856]: I0126 17:15:27.395320 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/a2268784-2c29-45fa-8bbc-4426f4c566b6-builder-dockercfg-8h4xs-push\") pod \"smart-gateway-operator-2-build\" (UID: \"a2268784-2c29-45fa-8bbc-4426f4c566b6\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 26 17:15:27 crc kubenswrapper[4856]: I0126 17:15:27.395394 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/a2268784-2c29-45fa-8bbc-4426f4c566b6-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"a2268784-2c29-45fa-8bbc-4426f4c566b6\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 26 17:15:27 crc kubenswrapper[4856]: I0126 17:15:27.395469 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a2268784-2c29-45fa-8bbc-4426f4c566b6-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"a2268784-2c29-45fa-8bbc-4426f4c566b6\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 26 17:15:27 crc kubenswrapper[4856]: I0126 17:15:27.395474 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a2268784-2c29-45fa-8bbc-4426f4c566b6-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"a2268784-2c29-45fa-8bbc-4426f4c566b6\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 26 17:15:27 crc kubenswrapper[4856]: I0126 17:15:27.395506 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a2268784-2c29-45fa-8bbc-4426f4c566b6-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"a2268784-2c29-45fa-8bbc-4426f4c566b6\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 26 17:15:27 crc kubenswrapper[4856]: I0126 17:15:27.395601 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a2268784-2c29-45fa-8bbc-4426f4c566b6-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"a2268784-2c29-45fa-8bbc-4426f4c566b6\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 26 17:15:27 crc kubenswrapper[4856]: I0126 17:15:27.395640 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a2268784-2c29-45fa-8bbc-4426f4c566b6-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"a2268784-2c29-45fa-8bbc-4426f4c566b6\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 26 17:15:27 crc kubenswrapper[4856]: I0126 17:15:27.395795 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a2268784-2c29-45fa-8bbc-4426f4c566b6-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"a2268784-2c29-45fa-8bbc-4426f4c566b6\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 26 17:15:27 crc kubenswrapper[4856]: I0126 17:15:27.395472 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a2268784-2c29-45fa-8bbc-4426f4c566b6-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"a2268784-2c29-45fa-8bbc-4426f4c566b6\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 26 17:15:27 crc kubenswrapper[4856]: I0126 17:15:27.396447 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a2268784-2c29-45fa-8bbc-4426f4c566b6-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"a2268784-2c29-45fa-8bbc-4426f4c566b6\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 26 17:15:27 crc kubenswrapper[4856]: I0126 17:15:27.396512 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a2268784-2c29-45fa-8bbc-4426f4c566b6-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"a2268784-2c29-45fa-8bbc-4426f4c566b6\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 26 17:15:27 crc kubenswrapper[4856]: I0126 17:15:27.396843 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a2268784-2c29-45fa-8bbc-4426f4c566b6-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"a2268784-2c29-45fa-8bbc-4426f4c566b6\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 26 17:15:27 crc kubenswrapper[4856]: I0126 17:15:27.396947 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a2268784-2c29-45fa-8bbc-4426f4c566b6-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"a2268784-2c29-45fa-8bbc-4426f4c566b6\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 26 17:15:27 crc kubenswrapper[4856]: I0126 17:15:27.396942 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a2268784-2c29-45fa-8bbc-4426f4c566b6-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"a2268784-2c29-45fa-8bbc-4426f4c566b6\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 26 17:15:27 crc kubenswrapper[4856]: I0126 17:15:27.398243 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a2268784-2c29-45fa-8bbc-4426f4c566b6-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"a2268784-2c29-45fa-8bbc-4426f4c566b6\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 26 17:15:27 crc kubenswrapper[4856]: I0126 17:15:27.405395 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/a2268784-2c29-45fa-8bbc-4426f4c566b6-builder-dockercfg-8h4xs-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"a2268784-2c29-45fa-8bbc-4426f4c566b6\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 26 17:15:27 crc kubenswrapper[4856]: I0126 17:15:27.412163 4856 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/a2268784-2c29-45fa-8bbc-4426f4c566b6-builder-dockercfg-8h4xs-push\") pod \"smart-gateway-operator-2-build\" (UID: \"a2268784-2c29-45fa-8bbc-4426f4c566b6\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 26 17:15:27 crc kubenswrapper[4856]: I0126 17:15:27.436948 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hs6s\" (UniqueName: \"kubernetes.io/projected/a2268784-2c29-45fa-8bbc-4426f4c566b6-kube-api-access-7hs6s\") pod \"smart-gateway-operator-2-build\" (UID: \"a2268784-2c29-45fa-8bbc-4426f4c566b6\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 26 17:15:27 crc kubenswrapper[4856]: I0126 17:15:27.726969 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Jan 26 17:15:28 crc kubenswrapper[4856]: I0126 17:15:28.016986 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Jan 26 17:15:28 crc kubenswrapper[4856]: I0126 17:15:28.512346 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"a2268784-2c29-45fa-8bbc-4426f4c566b6","Type":"ContainerStarted","Data":"06f2ce65083300356c56ba4f8a7f06492d9d84894e9cf8a9d78cb9fbe7bdcb6c"} Jan 26 17:15:29 crc kubenswrapper[4856]: I0126 17:15:29.523518 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_5ed22249-c992-4d01-a0ec-110a1ff4f786/docker-build/0.log" Jan 26 17:15:29 crc kubenswrapper[4856]: I0126 17:15:29.523975 4856 generic.go:334] "Generic (PLEG): container finished" podID="5ed22249-c992-4d01-a0ec-110a1ff4f786" containerID="fefa4e9ff30a3471c605b939c41d47c1a8dfc5a17b0199c647a6644529b4b943" exitCode=1 Jan 26 17:15:29 crc kubenswrapper[4856]: I0126 17:15:29.524018 4856 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"5ed22249-c992-4d01-a0ec-110a1ff4f786","Type":"ContainerDied","Data":"fefa4e9ff30a3471c605b939c41d47c1a8dfc5a17b0199c647a6644529b4b943"} Jan 26 17:15:29 crc kubenswrapper[4856]: I0126 17:15:29.524807 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"a2268784-2c29-45fa-8bbc-4426f4c566b6","Type":"ContainerStarted","Data":"f1c91d37a26cde48c83a1293d17ffc8b332fdc2b1d02b20144bd258945a7c047"} Jan 26 17:15:29 crc kubenswrapper[4856]: I0126 17:15:29.699499 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_5ed22249-c992-4d01-a0ec-110a1ff4f786/docker-build/0.log" Jan 26 17:15:29 crc kubenswrapper[4856]: I0126 17:15:29.699907 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Jan 26 17:15:29 crc kubenswrapper[4856]: I0126 17:15:29.928740 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ed22249-c992-4d01-a0ec-110a1ff4f786-build-ca-bundles\") pod \"5ed22249-c992-4d01-a0ec-110a1ff4f786\" (UID: \"5ed22249-c992-4d01-a0ec-110a1ff4f786\") " Jan 26 17:15:29 crc kubenswrapper[4856]: I0126 17:15:29.928787 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5ed22249-c992-4d01-a0ec-110a1ff4f786-build-system-configs\") pod \"5ed22249-c992-4d01-a0ec-110a1ff4f786\" (UID: \"5ed22249-c992-4d01-a0ec-110a1ff4f786\") " Jan 26 17:15:29 crc kubenswrapper[4856]: I0126 17:15:29.928819 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: 
\"kubernetes.io/secret/5ed22249-c992-4d01-a0ec-110a1ff4f786-builder-dockercfg-8h4xs-pull\") pod \"5ed22249-c992-4d01-a0ec-110a1ff4f786\" (UID: \"5ed22249-c992-4d01-a0ec-110a1ff4f786\") " Jan 26 17:15:29 crc kubenswrapper[4856]: I0126 17:15:29.928868 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/5ed22249-c992-4d01-a0ec-110a1ff4f786-builder-dockercfg-8h4xs-push\") pod \"5ed22249-c992-4d01-a0ec-110a1ff4f786\" (UID: \"5ed22249-c992-4d01-a0ec-110a1ff4f786\") " Jan 26 17:15:29 crc kubenswrapper[4856]: I0126 17:15:29.928911 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5ed22249-c992-4d01-a0ec-110a1ff4f786-buildworkdir\") pod \"5ed22249-c992-4d01-a0ec-110a1ff4f786\" (UID: \"5ed22249-c992-4d01-a0ec-110a1ff4f786\") " Jan 26 17:15:29 crc kubenswrapper[4856]: I0126 17:15:29.928980 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5ed22249-c992-4d01-a0ec-110a1ff4f786-node-pullsecrets\") pod \"5ed22249-c992-4d01-a0ec-110a1ff4f786\" (UID: \"5ed22249-c992-4d01-a0ec-110a1ff4f786\") " Jan 26 17:15:29 crc kubenswrapper[4856]: I0126 17:15:29.929011 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ed22249-c992-4d01-a0ec-110a1ff4f786-build-proxy-ca-bundles\") pod \"5ed22249-c992-4d01-a0ec-110a1ff4f786\" (UID: \"5ed22249-c992-4d01-a0ec-110a1ff4f786\") " Jan 26 17:15:29 crc kubenswrapper[4856]: I0126 17:15:29.929058 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5ed22249-c992-4d01-a0ec-110a1ff4f786-build-blob-cache\") pod \"5ed22249-c992-4d01-a0ec-110a1ff4f786\" (UID: \"5ed22249-c992-4d01-a0ec-110a1ff4f786\") 
" Jan 26 17:15:29 crc kubenswrapper[4856]: I0126 17:15:29.929087 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7q4r\" (UniqueName: \"kubernetes.io/projected/5ed22249-c992-4d01-a0ec-110a1ff4f786-kube-api-access-d7q4r\") pod \"5ed22249-c992-4d01-a0ec-110a1ff4f786\" (UID: \"5ed22249-c992-4d01-a0ec-110a1ff4f786\") " Jan 26 17:15:29 crc kubenswrapper[4856]: I0126 17:15:29.929131 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5ed22249-c992-4d01-a0ec-110a1ff4f786-buildcachedir\") pod \"5ed22249-c992-4d01-a0ec-110a1ff4f786\" (UID: \"5ed22249-c992-4d01-a0ec-110a1ff4f786\") " Jan 26 17:15:29 crc kubenswrapper[4856]: I0126 17:15:29.929126 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ed22249-c992-4d01-a0ec-110a1ff4f786-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "5ed22249-c992-4d01-a0ec-110a1ff4f786" (UID: "5ed22249-c992-4d01-a0ec-110a1ff4f786"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:15:29 crc kubenswrapper[4856]: I0126 17:15:29.929159 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5ed22249-c992-4d01-a0ec-110a1ff4f786-container-storage-run\") pod \"5ed22249-c992-4d01-a0ec-110a1ff4f786\" (UID: \"5ed22249-c992-4d01-a0ec-110a1ff4f786\") " Jan 26 17:15:29 crc kubenswrapper[4856]: I0126 17:15:29.929183 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5ed22249-c992-4d01-a0ec-110a1ff4f786-container-storage-root\") pod \"5ed22249-c992-4d01-a0ec-110a1ff4f786\" (UID: \"5ed22249-c992-4d01-a0ec-110a1ff4f786\") " Jan 26 17:15:29 crc kubenswrapper[4856]: I0126 17:15:29.929443 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ed22249-c992-4d01-a0ec-110a1ff4f786-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "5ed22249-c992-4d01-a0ec-110a1ff4f786" (UID: "5ed22249-c992-4d01-a0ec-110a1ff4f786"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:15:29 crc kubenswrapper[4856]: I0126 17:15:29.929461 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ed22249-c992-4d01-a0ec-110a1ff4f786-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "5ed22249-c992-4d01-a0ec-110a1ff4f786" (UID: "5ed22249-c992-4d01-a0ec-110a1ff4f786"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:15:29 crc kubenswrapper[4856]: I0126 17:15:29.929481 4856 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5ed22249-c992-4d01-a0ec-110a1ff4f786-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 26 17:15:29 crc kubenswrapper[4856]: I0126 17:15:29.929723 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ed22249-c992-4d01-a0ec-110a1ff4f786-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "5ed22249-c992-4d01-a0ec-110a1ff4f786" (UID: "5ed22249-c992-4d01-a0ec-110a1ff4f786"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:15:29 crc kubenswrapper[4856]: I0126 17:15:29.929756 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ed22249-c992-4d01-a0ec-110a1ff4f786-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "5ed22249-c992-4d01-a0ec-110a1ff4f786" (UID: "5ed22249-c992-4d01-a0ec-110a1ff4f786"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:15:29 crc kubenswrapper[4856]: I0126 17:15:29.930040 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ed22249-c992-4d01-a0ec-110a1ff4f786-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "5ed22249-c992-4d01-a0ec-110a1ff4f786" (UID: "5ed22249-c992-4d01-a0ec-110a1ff4f786"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:15:29 crc kubenswrapper[4856]: I0126 17:15:29.931825 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ed22249-c992-4d01-a0ec-110a1ff4f786-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "5ed22249-c992-4d01-a0ec-110a1ff4f786" (UID: "5ed22249-c992-4d01-a0ec-110a1ff4f786"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:15:29 crc kubenswrapper[4856]: I0126 17:15:29.935154 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ed22249-c992-4d01-a0ec-110a1ff4f786-kube-api-access-d7q4r" (OuterVolumeSpecName: "kube-api-access-d7q4r") pod "5ed22249-c992-4d01-a0ec-110a1ff4f786" (UID: "5ed22249-c992-4d01-a0ec-110a1ff4f786"). InnerVolumeSpecName "kube-api-access-d7q4r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:15:29 crc kubenswrapper[4856]: I0126 17:15:29.952513 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ed22249-c992-4d01-a0ec-110a1ff4f786-builder-dockercfg-8h4xs-push" (OuterVolumeSpecName: "builder-dockercfg-8h4xs-push") pod "5ed22249-c992-4d01-a0ec-110a1ff4f786" (UID: "5ed22249-c992-4d01-a0ec-110a1ff4f786"). InnerVolumeSpecName "builder-dockercfg-8h4xs-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:15:29 crc kubenswrapper[4856]: I0126 17:15:29.952647 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ed22249-c992-4d01-a0ec-110a1ff4f786-builder-dockercfg-8h4xs-pull" (OuterVolumeSpecName: "builder-dockercfg-8h4xs-pull") pod "5ed22249-c992-4d01-a0ec-110a1ff4f786" (UID: "5ed22249-c992-4d01-a0ec-110a1ff4f786"). InnerVolumeSpecName "builder-dockercfg-8h4xs-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:15:30 crc kubenswrapper[4856]: I0126 17:15:30.031201 4856 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5ed22249-c992-4d01-a0ec-110a1ff4f786-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 26 17:15:30 crc kubenswrapper[4856]: I0126 17:15:30.031253 4856 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ed22249-c992-4d01-a0ec-110a1ff4f786-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 17:15:30 crc kubenswrapper[4856]: I0126 17:15:30.031272 4856 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5ed22249-c992-4d01-a0ec-110a1ff4f786-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 26 17:15:30 crc kubenswrapper[4856]: I0126 17:15:30.031290 4856 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/5ed22249-c992-4d01-a0ec-110a1ff4f786-builder-dockercfg-8h4xs-pull\") on node \"crc\" DevicePath \"\"" Jan 26 17:15:30 crc kubenswrapper[4856]: I0126 17:15:30.031308 4856 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/5ed22249-c992-4d01-a0ec-110a1ff4f786-builder-dockercfg-8h4xs-push\") on node \"crc\" DevicePath \"\"" Jan 26 17:15:30 crc kubenswrapper[4856]: I0126 17:15:30.031326 4856 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5ed22249-c992-4d01-a0ec-110a1ff4f786-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 26 17:15:30 crc kubenswrapper[4856]: I0126 17:15:30.031344 4856 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ed22249-c992-4d01-a0ec-110a1ff4f786-build-proxy-ca-bundles\") on node 
\"crc\" DevicePath \"\"" Jan 26 17:15:30 crc kubenswrapper[4856]: I0126 17:15:30.031361 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7q4r\" (UniqueName: \"kubernetes.io/projected/5ed22249-c992-4d01-a0ec-110a1ff4f786-kube-api-access-d7q4r\") on node \"crc\" DevicePath \"\"" Jan 26 17:15:30 crc kubenswrapper[4856]: I0126 17:15:30.031378 4856 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5ed22249-c992-4d01-a0ec-110a1ff4f786-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 26 17:15:30 crc kubenswrapper[4856]: I0126 17:15:30.323879 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ed22249-c992-4d01-a0ec-110a1ff4f786-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "5ed22249-c992-4d01-a0ec-110a1ff4f786" (UID: "5ed22249-c992-4d01-a0ec-110a1ff4f786"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:15:30 crc kubenswrapper[4856]: I0126 17:15:30.335356 4856 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5ed22249-c992-4d01-a0ec-110a1ff4f786-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 26 17:15:30 crc kubenswrapper[4856]: I0126 17:15:30.534676 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_5ed22249-c992-4d01-a0ec-110a1ff4f786/docker-build/0.log" Jan 26 17:15:30 crc kubenswrapper[4856]: I0126 17:15:30.536933 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Jan 26 17:15:30 crc kubenswrapper[4856]: I0126 17:15:30.536943 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"5ed22249-c992-4d01-a0ec-110a1ff4f786","Type":"ContainerDied","Data":"32c29441f42e73b7bc00f264639641dd4797b6d25e6fdbaa53648b8d228dc4d6"} Jan 26 17:15:30 crc kubenswrapper[4856]: I0126 17:15:30.537224 4856 scope.go:117] "RemoveContainer" containerID="fefa4e9ff30a3471c605b939c41d47c1a8dfc5a17b0199c647a6644529b4b943" Jan 26 17:15:32 crc kubenswrapper[4856]: I0126 17:15:32.941983 4856 scope.go:117] "RemoveContainer" containerID="bbaa0e65ef3404876abbdc6bdb86225f8808634f7616aeafb5182074b30443dd" Jan 26 17:15:33 crc kubenswrapper[4856]: I0126 17:15:33.161380 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ed22249-c992-4d01-a0ec-110a1ff4f786-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "5ed22249-c992-4d01-a0ec-110a1ff4f786" (UID: "5ed22249-c992-4d01-a0ec-110a1ff4f786"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:15:33 crc kubenswrapper[4856]: I0126 17:15:33.174715 4856 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5ed22249-c992-4d01-a0ec-110a1ff4f786-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 26 17:15:33 crc kubenswrapper[4856]: I0126 17:15:33.277094 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 26 17:15:33 crc kubenswrapper[4856]: I0126 17:15:33.289485 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 26 17:15:33 crc kubenswrapper[4856]: I0126 17:15:33.405429 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ed22249-c992-4d01-a0ec-110a1ff4f786" path="/var/lib/kubelet/pods/5ed22249-c992-4d01-a0ec-110a1ff4f786/volumes" Jan 26 17:15:35 crc kubenswrapper[4856]: I0126 17:15:35.578361 4856 generic.go:334] "Generic (PLEG): container finished" podID="a2268784-2c29-45fa-8bbc-4426f4c566b6" containerID="f1c91d37a26cde48c83a1293d17ffc8b332fdc2b1d02b20144bd258945a7c047" exitCode=0 Jan 26 17:15:35 crc kubenswrapper[4856]: I0126 17:15:35.578477 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"a2268784-2c29-45fa-8bbc-4426f4c566b6","Type":"ContainerDied","Data":"f1c91d37a26cde48c83a1293d17ffc8b332fdc2b1d02b20144bd258945a7c047"} Jan 26 17:15:36 crc kubenswrapper[4856]: I0126 17:15:36.591086 4856 generic.go:334] "Generic (PLEG): container finished" podID="a2268784-2c29-45fa-8bbc-4426f4c566b6" containerID="78d8d20e1d5fb801d43742459729ca3548bb6acf3b8d3566e927a42c56e4febf" exitCode=0 Jan 26 17:15:36 crc kubenswrapper[4856]: I0126 17:15:36.591237 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" 
event={"ID":"a2268784-2c29-45fa-8bbc-4426f4c566b6","Type":"ContainerDied","Data":"78d8d20e1d5fb801d43742459729ca3548bb6acf3b8d3566e927a42c56e4febf"} Jan 26 17:15:36 crc kubenswrapper[4856]: I0126 17:15:36.640847 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_a2268784-2c29-45fa-8bbc-4426f4c566b6/manage-dockerfile/0.log" Jan 26 17:15:37 crc kubenswrapper[4856]: I0126 17:15:37.607179 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"a2268784-2c29-45fa-8bbc-4426f4c566b6","Type":"ContainerStarted","Data":"bace1809b288c35ceefa0274d2e72a4e0c2a1f126e377b5ce2ae65e151a5a665"} Jan 26 17:15:37 crc kubenswrapper[4856]: I0126 17:15:37.641829 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-2-build" podStartSLOduration=10.641802646 podStartE2EDuration="10.641802646s" podCreationTimestamp="2026-01-26 17:15:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:15:37.639903182 +0000 UTC m=+1033.593157163" watchObservedRunningTime="2026-01-26 17:15:37.641802646 +0000 UTC m=+1033.595056627" Jan 26 17:15:56 crc kubenswrapper[4856]: I0126 17:15:56.939447 4856 patch_prober.go:28] interesting pod/machine-config-daemon-xm9cq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:15:56 crc kubenswrapper[4856]: I0126 17:15:56.940153 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 26 17:16:19 crc kubenswrapper[4856]: I0126 17:16:19.354122 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bz275"] Jan 26 17:16:19 crc kubenswrapper[4856]: E0126 17:16:19.356022 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ed22249-c992-4d01-a0ec-110a1ff4f786" containerName="docker-build" Jan 26 17:16:19 crc kubenswrapper[4856]: I0126 17:16:19.356165 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ed22249-c992-4d01-a0ec-110a1ff4f786" containerName="docker-build" Jan 26 17:16:19 crc kubenswrapper[4856]: E0126 17:16:19.356192 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ed22249-c992-4d01-a0ec-110a1ff4f786" containerName="manage-dockerfile" Jan 26 17:16:19 crc kubenswrapper[4856]: I0126 17:16:19.356201 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ed22249-c992-4d01-a0ec-110a1ff4f786" containerName="manage-dockerfile" Jan 26 17:16:19 crc kubenswrapper[4856]: I0126 17:16:19.357417 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ed22249-c992-4d01-a0ec-110a1ff4f786" containerName="docker-build" Jan 26 17:16:19 crc kubenswrapper[4856]: I0126 17:16:19.361241 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bz275"] Jan 26 17:16:19 crc kubenswrapper[4856]: I0126 17:16:19.362608 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bz275" Jan 26 17:16:19 crc kubenswrapper[4856]: I0126 17:16:19.504830 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqldd\" (UniqueName: \"kubernetes.io/projected/4dfaabcd-3733-443e-8a38-060ad2953eec-kube-api-access-tqldd\") pod \"certified-operators-bz275\" (UID: \"4dfaabcd-3733-443e-8a38-060ad2953eec\") " pod="openshift-marketplace/certified-operators-bz275" Jan 26 17:16:19 crc kubenswrapper[4856]: I0126 17:16:19.504909 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dfaabcd-3733-443e-8a38-060ad2953eec-utilities\") pod \"certified-operators-bz275\" (UID: \"4dfaabcd-3733-443e-8a38-060ad2953eec\") " pod="openshift-marketplace/certified-operators-bz275" Jan 26 17:16:19 crc kubenswrapper[4856]: I0126 17:16:19.504987 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dfaabcd-3733-443e-8a38-060ad2953eec-catalog-content\") pod \"certified-operators-bz275\" (UID: \"4dfaabcd-3733-443e-8a38-060ad2953eec\") " pod="openshift-marketplace/certified-operators-bz275" Jan 26 17:16:19 crc kubenswrapper[4856]: I0126 17:16:19.606595 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqldd\" (UniqueName: \"kubernetes.io/projected/4dfaabcd-3733-443e-8a38-060ad2953eec-kube-api-access-tqldd\") pod \"certified-operators-bz275\" (UID: \"4dfaabcd-3733-443e-8a38-060ad2953eec\") " pod="openshift-marketplace/certified-operators-bz275" Jan 26 17:16:19 crc kubenswrapper[4856]: I0126 17:16:19.606695 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dfaabcd-3733-443e-8a38-060ad2953eec-utilities\") pod 
\"certified-operators-bz275\" (UID: \"4dfaabcd-3733-443e-8a38-060ad2953eec\") " pod="openshift-marketplace/certified-operators-bz275" Jan 26 17:16:19 crc kubenswrapper[4856]: I0126 17:16:19.606747 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dfaabcd-3733-443e-8a38-060ad2953eec-catalog-content\") pod \"certified-operators-bz275\" (UID: \"4dfaabcd-3733-443e-8a38-060ad2953eec\") " pod="openshift-marketplace/certified-operators-bz275" Jan 26 17:16:19 crc kubenswrapper[4856]: I0126 17:16:19.607412 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dfaabcd-3733-443e-8a38-060ad2953eec-catalog-content\") pod \"certified-operators-bz275\" (UID: \"4dfaabcd-3733-443e-8a38-060ad2953eec\") " pod="openshift-marketplace/certified-operators-bz275" Jan 26 17:16:19 crc kubenswrapper[4856]: I0126 17:16:19.607407 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dfaabcd-3733-443e-8a38-060ad2953eec-utilities\") pod \"certified-operators-bz275\" (UID: \"4dfaabcd-3733-443e-8a38-060ad2953eec\") " pod="openshift-marketplace/certified-operators-bz275" Jan 26 17:16:19 crc kubenswrapper[4856]: I0126 17:16:19.631079 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqldd\" (UniqueName: \"kubernetes.io/projected/4dfaabcd-3733-443e-8a38-060ad2953eec-kube-api-access-tqldd\") pod \"certified-operators-bz275\" (UID: \"4dfaabcd-3733-443e-8a38-060ad2953eec\") " pod="openshift-marketplace/certified-operators-bz275" Jan 26 17:16:19 crc kubenswrapper[4856]: I0126 17:16:19.684515 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bz275" Jan 26 17:16:20 crc kubenswrapper[4856]: I0126 17:16:20.011199 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bz275"] Jan 26 17:16:20 crc kubenswrapper[4856]: I0126 17:16:20.349068 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bz275" event={"ID":"4dfaabcd-3733-443e-8a38-060ad2953eec","Type":"ContainerStarted","Data":"90b10b1ad76d4b052f858ba03b4cb3df4bcd4b4057edd943836d778d0526b08f"} Jan 26 17:16:24 crc kubenswrapper[4856]: I0126 17:16:24.390168 4856 generic.go:334] "Generic (PLEG): container finished" podID="4dfaabcd-3733-443e-8a38-060ad2953eec" containerID="ea9dd6ca56fab3dd3d36f63cf819d952d1b23acac4807908175e381c3bc121d4" exitCode=0 Jan 26 17:16:24 crc kubenswrapper[4856]: I0126 17:16:24.390217 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bz275" event={"ID":"4dfaabcd-3733-443e-8a38-060ad2953eec","Type":"ContainerDied","Data":"ea9dd6ca56fab3dd3d36f63cf819d952d1b23acac4807908175e381c3bc121d4"} Jan 26 17:16:24 crc kubenswrapper[4856]: I0126 17:16:24.392313 4856 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 17:16:26 crc kubenswrapper[4856]: I0126 17:16:26.406359 4856 generic.go:334] "Generic (PLEG): container finished" podID="4dfaabcd-3733-443e-8a38-060ad2953eec" containerID="4a6f6ebaf882577a384aa190207606f891d548d42a3382b291a7ded23f0cec89" exitCode=0 Jan 26 17:16:26 crc kubenswrapper[4856]: I0126 17:16:26.406538 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bz275" event={"ID":"4dfaabcd-3733-443e-8a38-060ad2953eec","Type":"ContainerDied","Data":"4a6f6ebaf882577a384aa190207606f891d548d42a3382b291a7ded23f0cec89"} Jan 26 17:16:26 crc kubenswrapper[4856]: I0126 17:16:26.939648 4856 patch_prober.go:28] interesting 
pod/machine-config-daemon-xm9cq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:16:26 crc kubenswrapper[4856]: I0126 17:16:26.939850 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:16:26 crc kubenswrapper[4856]: I0126 17:16:26.939983 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" Jan 26 17:16:26 crc kubenswrapper[4856]: I0126 17:16:26.941340 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fdaad4602089daad40b0395fbc761e615a8ba2a94c8f5b977142a787034cddb7"} pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 17:16:26 crc kubenswrapper[4856]: I0126 17:16:26.941476 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" containerName="machine-config-daemon" containerID="cri-o://fdaad4602089daad40b0395fbc761e615a8ba2a94c8f5b977142a787034cddb7" gracePeriod=600 Jan 26 17:16:27 crc kubenswrapper[4856]: I0126 17:16:27.415159 4856 generic.go:334] "Generic (PLEG): container finished" podID="63c75ede-5170-4db0-811b-5217ef8d72b3" containerID="fdaad4602089daad40b0395fbc761e615a8ba2a94c8f5b977142a787034cddb7" exitCode=0 Jan 26 17:16:27 crc kubenswrapper[4856]: I0126 17:16:27.415270 
4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" event={"ID":"63c75ede-5170-4db0-811b-5217ef8d72b3","Type":"ContainerDied","Data":"fdaad4602089daad40b0395fbc761e615a8ba2a94c8f5b977142a787034cddb7"} Jan 26 17:16:27 crc kubenswrapper[4856]: I0126 17:16:27.415666 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" event={"ID":"63c75ede-5170-4db0-811b-5217ef8d72b3","Type":"ContainerStarted","Data":"5846ab4d870be5fcbab796c3e27690d2c13d129480d6fcd21b3b0d1c535f0cff"} Jan 26 17:16:27 crc kubenswrapper[4856]: I0126 17:16:27.415708 4856 scope.go:117] "RemoveContainer" containerID="bb3fb578d0ea2b4eb264b402043faa4d1923f5d38749a2ee2c65b084c2e291bd" Jan 26 17:16:27 crc kubenswrapper[4856]: I0126 17:16:27.418348 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bz275" event={"ID":"4dfaabcd-3733-443e-8a38-060ad2953eec","Type":"ContainerStarted","Data":"6b1c37d94c31c69c26080700f0bce3069c7c6f0ae57f4ffc06c5518d82c24f5f"} Jan 26 17:16:27 crc kubenswrapper[4856]: I0126 17:16:27.451098 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bz275" podStartSLOduration=5.764174881 podStartE2EDuration="8.451052735s" podCreationTimestamp="2026-01-26 17:16:19 +0000 UTC" firstStartedPulling="2026-01-26 17:16:24.391940264 +0000 UTC m=+1080.345194245" lastFinishedPulling="2026-01-26 17:16:27.078818108 +0000 UTC m=+1083.032072099" observedRunningTime="2026-01-26 17:16:27.449376447 +0000 UTC m=+1083.402630438" watchObservedRunningTime="2026-01-26 17:16:27.451052735 +0000 UTC m=+1083.404306716" Jan 26 17:16:29 crc kubenswrapper[4856]: I0126 17:16:29.684816 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bz275" Jan 26 17:16:29 crc kubenswrapper[4856]: I0126 
17:16:29.685500 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bz275" Jan 26 17:16:29 crc kubenswrapper[4856]: I0126 17:16:29.736224 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bz275" Jan 26 17:16:39 crc kubenswrapper[4856]: I0126 17:16:39.733600 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bz275" Jan 26 17:16:40 crc kubenswrapper[4856]: I0126 17:16:40.693620 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bz275"] Jan 26 17:16:40 crc kubenswrapper[4856]: I0126 17:16:40.694102 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bz275" podUID="4dfaabcd-3733-443e-8a38-060ad2953eec" containerName="registry-server" containerID="cri-o://6b1c37d94c31c69c26080700f0bce3069c7c6f0ae57f4ffc06c5518d82c24f5f" gracePeriod=2 Jan 26 17:16:43 crc kubenswrapper[4856]: I0126 17:16:43.554686 4856 generic.go:334] "Generic (PLEG): container finished" podID="4dfaabcd-3733-443e-8a38-060ad2953eec" containerID="6b1c37d94c31c69c26080700f0bce3069c7c6f0ae57f4ffc06c5518d82c24f5f" exitCode=0 Jan 26 17:16:43 crc kubenswrapper[4856]: I0126 17:16:43.554771 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bz275" event={"ID":"4dfaabcd-3733-443e-8a38-060ad2953eec","Type":"ContainerDied","Data":"6b1c37d94c31c69c26080700f0bce3069c7c6f0ae57f4ffc06c5518d82c24f5f"} Jan 26 17:16:43 crc kubenswrapper[4856]: I0126 17:16:43.805909 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bz275" Jan 26 17:16:43 crc kubenswrapper[4856]: I0126 17:16:43.972734 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dfaabcd-3733-443e-8a38-060ad2953eec-catalog-content\") pod \"4dfaabcd-3733-443e-8a38-060ad2953eec\" (UID: \"4dfaabcd-3733-443e-8a38-060ad2953eec\") " Jan 26 17:16:43 crc kubenswrapper[4856]: I0126 17:16:43.973035 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqldd\" (UniqueName: \"kubernetes.io/projected/4dfaabcd-3733-443e-8a38-060ad2953eec-kube-api-access-tqldd\") pod \"4dfaabcd-3733-443e-8a38-060ad2953eec\" (UID: \"4dfaabcd-3733-443e-8a38-060ad2953eec\") " Jan 26 17:16:43 crc kubenswrapper[4856]: I0126 17:16:43.973136 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dfaabcd-3733-443e-8a38-060ad2953eec-utilities\") pod \"4dfaabcd-3733-443e-8a38-060ad2953eec\" (UID: \"4dfaabcd-3733-443e-8a38-060ad2953eec\") " Jan 26 17:16:43 crc kubenswrapper[4856]: I0126 17:16:43.974293 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4dfaabcd-3733-443e-8a38-060ad2953eec-utilities" (OuterVolumeSpecName: "utilities") pod "4dfaabcd-3733-443e-8a38-060ad2953eec" (UID: "4dfaabcd-3733-443e-8a38-060ad2953eec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:16:43 crc kubenswrapper[4856]: I0126 17:16:43.980259 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dfaabcd-3733-443e-8a38-060ad2953eec-kube-api-access-tqldd" (OuterVolumeSpecName: "kube-api-access-tqldd") pod "4dfaabcd-3733-443e-8a38-060ad2953eec" (UID: "4dfaabcd-3733-443e-8a38-060ad2953eec"). InnerVolumeSpecName "kube-api-access-tqldd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:16:44 crc kubenswrapper[4856]: I0126 17:16:44.021267 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4dfaabcd-3733-443e-8a38-060ad2953eec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4dfaabcd-3733-443e-8a38-060ad2953eec" (UID: "4dfaabcd-3733-443e-8a38-060ad2953eec"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:16:44 crc kubenswrapper[4856]: I0126 17:16:44.074844 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dfaabcd-3733-443e-8a38-060ad2953eec-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:16:44 crc kubenswrapper[4856]: I0126 17:16:44.074892 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dfaabcd-3733-443e-8a38-060ad2953eec-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:16:44 crc kubenswrapper[4856]: I0126 17:16:44.074908 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqldd\" (UniqueName: \"kubernetes.io/projected/4dfaabcd-3733-443e-8a38-060ad2953eec-kube-api-access-tqldd\") on node \"crc\" DevicePath \"\"" Jan 26 17:16:44 crc kubenswrapper[4856]: I0126 17:16:44.564163 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bz275" event={"ID":"4dfaabcd-3733-443e-8a38-060ad2953eec","Type":"ContainerDied","Data":"90b10b1ad76d4b052f858ba03b4cb3df4bcd4b4057edd943836d778d0526b08f"} Jan 26 17:16:44 crc kubenswrapper[4856]: I0126 17:16:44.564426 4856 scope.go:117] "RemoveContainer" containerID="6b1c37d94c31c69c26080700f0bce3069c7c6f0ae57f4ffc06c5518d82c24f5f" Jan 26 17:16:44 crc kubenswrapper[4856]: I0126 17:16:44.564246 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bz275" Jan 26 17:16:44 crc kubenswrapper[4856]: I0126 17:16:44.584987 4856 scope.go:117] "RemoveContainer" containerID="4a6f6ebaf882577a384aa190207606f891d548d42a3382b291a7ded23f0cec89" Jan 26 17:16:44 crc kubenswrapper[4856]: I0126 17:16:44.594631 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bz275"] Jan 26 17:16:44 crc kubenswrapper[4856]: I0126 17:16:44.607373 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bz275"] Jan 26 17:16:44 crc kubenswrapper[4856]: I0126 17:16:44.616119 4856 scope.go:117] "RemoveContainer" containerID="ea9dd6ca56fab3dd3d36f63cf819d952d1b23acac4807908175e381c3bc121d4" Jan 26 17:16:45 crc kubenswrapper[4856]: I0126 17:16:45.404002 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dfaabcd-3733-443e-8a38-060ad2953eec" path="/var/lib/kubelet/pods/4dfaabcd-3733-443e-8a38-060ad2953eec/volumes" Jan 26 17:17:02 crc kubenswrapper[4856]: I0126 17:17:02.720485 4856 generic.go:334] "Generic (PLEG): container finished" podID="a2268784-2c29-45fa-8bbc-4426f4c566b6" containerID="bace1809b288c35ceefa0274d2e72a4e0c2a1f126e377b5ce2ae65e151a5a665" exitCode=0 Jan 26 17:17:02 crc kubenswrapper[4856]: I0126 17:17:02.720835 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"a2268784-2c29-45fa-8bbc-4426f4c566b6","Type":"ContainerDied","Data":"bace1809b288c35ceefa0274d2e72a4e0c2a1f126e377b5ce2ae65e151a5a665"} Jan 26 17:17:04 crc kubenswrapper[4856]: I0126 17:17:04.043279 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Jan 26 17:17:04 crc kubenswrapper[4856]: I0126 17:17:04.175855 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a2268784-2c29-45fa-8bbc-4426f4c566b6-container-storage-root\") pod \"a2268784-2c29-45fa-8bbc-4426f4c566b6\" (UID: \"a2268784-2c29-45fa-8bbc-4426f4c566b6\") " Jan 26 17:17:04 crc kubenswrapper[4856]: I0126 17:17:04.175961 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a2268784-2c29-45fa-8bbc-4426f4c566b6-buildcachedir\") pod \"a2268784-2c29-45fa-8bbc-4426f4c566b6\" (UID: \"a2268784-2c29-45fa-8bbc-4426f4c566b6\") " Jan 26 17:17:04 crc kubenswrapper[4856]: I0126 17:17:04.176009 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a2268784-2c29-45fa-8bbc-4426f4c566b6-build-blob-cache\") pod \"a2268784-2c29-45fa-8bbc-4426f4c566b6\" (UID: \"a2268784-2c29-45fa-8bbc-4426f4c566b6\") " Jan 26 17:17:04 crc kubenswrapper[4856]: I0126 17:17:04.176051 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a2268784-2c29-45fa-8bbc-4426f4c566b6-build-ca-bundles\") pod \"a2268784-2c29-45fa-8bbc-4426f4c566b6\" (UID: \"a2268784-2c29-45fa-8bbc-4426f4c566b6\") " Jan 26 17:17:04 crc kubenswrapper[4856]: I0126 17:17:04.176074 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2268784-2c29-45fa-8bbc-4426f4c566b6-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "a2268784-2c29-45fa-8bbc-4426f4c566b6" (UID: "a2268784-2c29-45fa-8bbc-4426f4c566b6"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:17:04 crc kubenswrapper[4856]: I0126 17:17:04.176080 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a2268784-2c29-45fa-8bbc-4426f4c566b6-node-pullsecrets\") pod \"a2268784-2c29-45fa-8bbc-4426f4c566b6\" (UID: \"a2268784-2c29-45fa-8bbc-4426f4c566b6\") " Jan 26 17:17:04 crc kubenswrapper[4856]: I0126 17:17:04.176105 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2268784-2c29-45fa-8bbc-4426f4c566b6-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "a2268784-2c29-45fa-8bbc-4426f4c566b6" (UID: "a2268784-2c29-45fa-8bbc-4426f4c566b6"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:17:04 crc kubenswrapper[4856]: I0126 17:17:04.176137 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/a2268784-2c29-45fa-8bbc-4426f4c566b6-builder-dockercfg-8h4xs-pull\") pod \"a2268784-2c29-45fa-8bbc-4426f4c566b6\" (UID: \"a2268784-2c29-45fa-8bbc-4426f4c566b6\") " Jan 26 17:17:04 crc kubenswrapper[4856]: I0126 17:17:04.176163 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a2268784-2c29-45fa-8bbc-4426f4c566b6-build-proxy-ca-bundles\") pod \"a2268784-2c29-45fa-8bbc-4426f4c566b6\" (UID: \"a2268784-2c29-45fa-8bbc-4426f4c566b6\") " Jan 26 17:17:04 crc kubenswrapper[4856]: I0126 17:17:04.176196 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a2268784-2c29-45fa-8bbc-4426f4c566b6-container-storage-run\") pod \"a2268784-2c29-45fa-8bbc-4426f4c566b6\" (UID: \"a2268784-2c29-45fa-8bbc-4426f4c566b6\") " Jan 26 17:17:04 crc 
kubenswrapper[4856]: I0126 17:17:04.176232 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/a2268784-2c29-45fa-8bbc-4426f4c566b6-builder-dockercfg-8h4xs-push\") pod \"a2268784-2c29-45fa-8bbc-4426f4c566b6\" (UID: \"a2268784-2c29-45fa-8bbc-4426f4c566b6\") " Jan 26 17:17:04 crc kubenswrapper[4856]: I0126 17:17:04.176254 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a2268784-2c29-45fa-8bbc-4426f4c566b6-build-system-configs\") pod \"a2268784-2c29-45fa-8bbc-4426f4c566b6\" (UID: \"a2268784-2c29-45fa-8bbc-4426f4c566b6\") " Jan 26 17:17:04 crc kubenswrapper[4856]: I0126 17:17:04.176275 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hs6s\" (UniqueName: \"kubernetes.io/projected/a2268784-2c29-45fa-8bbc-4426f4c566b6-kube-api-access-7hs6s\") pod \"a2268784-2c29-45fa-8bbc-4426f4c566b6\" (UID: \"a2268784-2c29-45fa-8bbc-4426f4c566b6\") " Jan 26 17:17:04 crc kubenswrapper[4856]: I0126 17:17:04.176310 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a2268784-2c29-45fa-8bbc-4426f4c566b6-buildworkdir\") pod \"a2268784-2c29-45fa-8bbc-4426f4c566b6\" (UID: \"a2268784-2c29-45fa-8bbc-4426f4c566b6\") " Jan 26 17:17:04 crc kubenswrapper[4856]: I0126 17:17:04.176623 4856 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a2268784-2c29-45fa-8bbc-4426f4c566b6-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 26 17:17:04 crc kubenswrapper[4856]: I0126 17:17:04.176644 4856 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a2268784-2c29-45fa-8bbc-4426f4c566b6-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 26 17:17:04 
crc kubenswrapper[4856]: I0126 17:17:04.177388 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2268784-2c29-45fa-8bbc-4426f4c566b6-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "a2268784-2c29-45fa-8bbc-4426f4c566b6" (UID: "a2268784-2c29-45fa-8bbc-4426f4c566b6"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:17:04 crc kubenswrapper[4856]: I0126 17:17:04.177781 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2268784-2c29-45fa-8bbc-4426f4c566b6-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "a2268784-2c29-45fa-8bbc-4426f4c566b6" (UID: "a2268784-2c29-45fa-8bbc-4426f4c566b6"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:17:04 crc kubenswrapper[4856]: I0126 17:17:04.177839 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2268784-2c29-45fa-8bbc-4426f4c566b6-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "a2268784-2c29-45fa-8bbc-4426f4c566b6" (UID: "a2268784-2c29-45fa-8bbc-4426f4c566b6"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:17:04 crc kubenswrapper[4856]: I0126 17:17:04.178727 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2268784-2c29-45fa-8bbc-4426f4c566b6-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "a2268784-2c29-45fa-8bbc-4426f4c566b6" (UID: "a2268784-2c29-45fa-8bbc-4426f4c566b6"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:17:04 crc kubenswrapper[4856]: I0126 17:17:04.182246 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2268784-2c29-45fa-8bbc-4426f4c566b6-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "a2268784-2c29-45fa-8bbc-4426f4c566b6" (UID: "a2268784-2c29-45fa-8bbc-4426f4c566b6"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:17:04 crc kubenswrapper[4856]: I0126 17:17:04.183783 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2268784-2c29-45fa-8bbc-4426f4c566b6-kube-api-access-7hs6s" (OuterVolumeSpecName: "kube-api-access-7hs6s") pod "a2268784-2c29-45fa-8bbc-4426f4c566b6" (UID: "a2268784-2c29-45fa-8bbc-4426f4c566b6"). InnerVolumeSpecName "kube-api-access-7hs6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:17:04 crc kubenswrapper[4856]: I0126 17:17:04.184243 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2268784-2c29-45fa-8bbc-4426f4c566b6-builder-dockercfg-8h4xs-push" (OuterVolumeSpecName: "builder-dockercfg-8h4xs-push") pod "a2268784-2c29-45fa-8bbc-4426f4c566b6" (UID: "a2268784-2c29-45fa-8bbc-4426f4c566b6"). InnerVolumeSpecName "builder-dockercfg-8h4xs-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:17:04 crc kubenswrapper[4856]: I0126 17:17:04.185948 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2268784-2c29-45fa-8bbc-4426f4c566b6-builder-dockercfg-8h4xs-pull" (OuterVolumeSpecName: "builder-dockercfg-8h4xs-pull") pod "a2268784-2c29-45fa-8bbc-4426f4c566b6" (UID: "a2268784-2c29-45fa-8bbc-4426f4c566b6"). InnerVolumeSpecName "builder-dockercfg-8h4xs-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:17:04 crc kubenswrapper[4856]: I0126 17:17:04.278396 4856 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a2268784-2c29-45fa-8bbc-4426f4c566b6-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 17:17:04 crc kubenswrapper[4856]: I0126 17:17:04.278434 4856 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/a2268784-2c29-45fa-8bbc-4426f4c566b6-builder-dockercfg-8h4xs-pull\") on node \"crc\" DevicePath \"\"" Jan 26 17:17:04 crc kubenswrapper[4856]: I0126 17:17:04.278448 4856 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a2268784-2c29-45fa-8bbc-4426f4c566b6-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 17:17:04 crc kubenswrapper[4856]: I0126 17:17:04.278458 4856 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a2268784-2c29-45fa-8bbc-4426f4c566b6-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 26 17:17:04 crc kubenswrapper[4856]: I0126 17:17:04.278470 4856 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/a2268784-2c29-45fa-8bbc-4426f4c566b6-builder-dockercfg-8h4xs-push\") on node \"crc\" DevicePath \"\"" Jan 26 17:17:04 crc kubenswrapper[4856]: I0126 17:17:04.278480 4856 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a2268784-2c29-45fa-8bbc-4426f4c566b6-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 26 17:17:04 crc kubenswrapper[4856]: I0126 17:17:04.278489 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hs6s\" (UniqueName: 
\"kubernetes.io/projected/a2268784-2c29-45fa-8bbc-4426f4c566b6-kube-api-access-7hs6s\") on node \"crc\" DevicePath \"\"" Jan 26 17:17:04 crc kubenswrapper[4856]: I0126 17:17:04.278499 4856 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a2268784-2c29-45fa-8bbc-4426f4c566b6-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 26 17:17:04 crc kubenswrapper[4856]: I0126 17:17:04.416831 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2268784-2c29-45fa-8bbc-4426f4c566b6-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "a2268784-2c29-45fa-8bbc-4426f4c566b6" (UID: "a2268784-2c29-45fa-8bbc-4426f4c566b6"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:17:04 crc kubenswrapper[4856]: I0126 17:17:04.482018 4856 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a2268784-2c29-45fa-8bbc-4426f4c566b6-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 26 17:17:04 crc kubenswrapper[4856]: I0126 17:17:04.735544 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"a2268784-2c29-45fa-8bbc-4426f4c566b6","Type":"ContainerDied","Data":"06f2ce65083300356c56ba4f8a7f06492d9d84894e9cf8a9d78cb9fbe7bdcb6c"} Jan 26 17:17:04 crc kubenswrapper[4856]: I0126 17:17:04.735608 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06f2ce65083300356c56ba4f8a7f06492d9d84894e9cf8a9d78cb9fbe7bdcb6c" Jan 26 17:17:04 crc kubenswrapper[4856]: I0126 17:17:04.735650 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Jan 26 17:17:06 crc kubenswrapper[4856]: I0126 17:17:06.236577 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2268784-2c29-45fa-8bbc-4426f4c566b6-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "a2268784-2c29-45fa-8bbc-4426f4c566b6" (UID: "a2268784-2c29-45fa-8bbc-4426f4c566b6"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:17:06 crc kubenswrapper[4856]: I0126 17:17:06.309232 4856 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a2268784-2c29-45fa-8bbc-4426f4c566b6-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.003315 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 26 17:17:09 crc kubenswrapper[4856]: E0126 17:17:09.004076 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dfaabcd-3733-443e-8a38-060ad2953eec" containerName="extract-content" Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.004097 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dfaabcd-3733-443e-8a38-060ad2953eec" containerName="extract-content" Jan 26 17:17:09 crc kubenswrapper[4856]: E0126 17:17:09.004115 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2268784-2c29-45fa-8bbc-4426f4c566b6" containerName="git-clone" Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.004125 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2268784-2c29-45fa-8bbc-4426f4c566b6" containerName="git-clone" Jan 26 17:17:09 crc kubenswrapper[4856]: E0126 17:17:09.004142 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dfaabcd-3733-443e-8a38-060ad2953eec" containerName="registry-server" Jan 26 17:17:09 crc 
kubenswrapper[4856]: I0126 17:17:09.004152 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dfaabcd-3733-443e-8a38-060ad2953eec" containerName="registry-server" Jan 26 17:17:09 crc kubenswrapper[4856]: E0126 17:17:09.004169 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dfaabcd-3733-443e-8a38-060ad2953eec" containerName="extract-utilities" Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.004179 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dfaabcd-3733-443e-8a38-060ad2953eec" containerName="extract-utilities" Jan 26 17:17:09 crc kubenswrapper[4856]: E0126 17:17:09.004201 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2268784-2c29-45fa-8bbc-4426f4c566b6" containerName="docker-build" Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.004211 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2268784-2c29-45fa-8bbc-4426f4c566b6" containerName="docker-build" Jan 26 17:17:09 crc kubenswrapper[4856]: E0126 17:17:09.004227 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2268784-2c29-45fa-8bbc-4426f4c566b6" containerName="manage-dockerfile" Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.004237 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2268784-2c29-45fa-8bbc-4426f4c566b6" containerName="manage-dockerfile" Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.004398 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dfaabcd-3733-443e-8a38-060ad2953eec" containerName="registry-server" Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.004423 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2268784-2c29-45fa-8bbc-4426f4c566b6" containerName="docker-build" Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.005404 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.008872 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-ca" Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.008888 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-sys-config" Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.009236 4856 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-8h4xs" Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.011709 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-global-ca" Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.029436 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.154654 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/845832d9-625c-452e-b900-4e3c2df2ef4d-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"845832d9-625c-452e-b900-4e3c2df2ef4d\") " pod="service-telemetry/sg-core-1-build" Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.155195 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/845832d9-625c-452e-b900-4e3c2df2ef4d-buildcachedir\") pod \"sg-core-1-build\" (UID: \"845832d9-625c-452e-b900-4e3c2df2ef4d\") " pod="service-telemetry/sg-core-1-build" Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.155335 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/845832d9-625c-452e-b900-4e3c2df2ef4d-container-storage-root\") pod 
\"sg-core-1-build\" (UID: \"845832d9-625c-452e-b900-4e3c2df2ef4d\") " pod="service-telemetry/sg-core-1-build" Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.155588 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/845832d9-625c-452e-b900-4e3c2df2ef4d-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"845832d9-625c-452e-b900-4e3c2df2ef4d\") " pod="service-telemetry/sg-core-1-build" Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.155832 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/845832d9-625c-452e-b900-4e3c2df2ef4d-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"845832d9-625c-452e-b900-4e3c2df2ef4d\") " pod="service-telemetry/sg-core-1-build" Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.155974 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/845832d9-625c-452e-b900-4e3c2df2ef4d-builder-dockercfg-8h4xs-push\") pod \"sg-core-1-build\" (UID: \"845832d9-625c-452e-b900-4e3c2df2ef4d\") " pod="service-telemetry/sg-core-1-build" Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.156031 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/845832d9-625c-452e-b900-4e3c2df2ef4d-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"845832d9-625c-452e-b900-4e3c2df2ef4d\") " pod="service-telemetry/sg-core-1-build" Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.156049 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/845832d9-625c-452e-b900-4e3c2df2ef4d-container-storage-run\") pod \"sg-core-1-build\" (UID: \"845832d9-625c-452e-b900-4e3c2df2ef4d\") " pod="service-telemetry/sg-core-1-build" Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.156082 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/845832d9-625c-452e-b900-4e3c2df2ef4d-buildworkdir\") pod \"sg-core-1-build\" (UID: \"845832d9-625c-452e-b900-4e3c2df2ef4d\") " pod="service-telemetry/sg-core-1-build" Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.156100 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/845832d9-625c-452e-b900-4e3c2df2ef4d-builder-dockercfg-8h4xs-pull\") pod \"sg-core-1-build\" (UID: \"845832d9-625c-452e-b900-4e3c2df2ef4d\") " pod="service-telemetry/sg-core-1-build" Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.156341 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/845832d9-625c-452e-b900-4e3c2df2ef4d-build-system-configs\") pod \"sg-core-1-build\" (UID: \"845832d9-625c-452e-b900-4e3c2df2ef4d\") " pod="service-telemetry/sg-core-1-build" Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.156381 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7glkd\" (UniqueName: \"kubernetes.io/projected/845832d9-625c-452e-b900-4e3c2df2ef4d-kube-api-access-7glkd\") pod \"sg-core-1-build\" (UID: \"845832d9-625c-452e-b900-4e3c2df2ef4d\") " pod="service-telemetry/sg-core-1-build" Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.258045 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/845832d9-625c-452e-b900-4e3c2df2ef4d-build-system-configs\") pod \"sg-core-1-build\" (UID: \"845832d9-625c-452e-b900-4e3c2df2ef4d\") " pod="service-telemetry/sg-core-1-build" Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.258089 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7glkd\" (UniqueName: \"kubernetes.io/projected/845832d9-625c-452e-b900-4e3c2df2ef4d-kube-api-access-7glkd\") pod \"sg-core-1-build\" (UID: \"845832d9-625c-452e-b900-4e3c2df2ef4d\") " pod="service-telemetry/sg-core-1-build" Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.258130 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/845832d9-625c-452e-b900-4e3c2df2ef4d-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"845832d9-625c-452e-b900-4e3c2df2ef4d\") " pod="service-telemetry/sg-core-1-build" Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.258146 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/845832d9-625c-452e-b900-4e3c2df2ef4d-buildcachedir\") pod \"sg-core-1-build\" (UID: \"845832d9-625c-452e-b900-4e3c2df2ef4d\") " pod="service-telemetry/sg-core-1-build" Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.258164 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/845832d9-625c-452e-b900-4e3c2df2ef4d-container-storage-root\") pod \"sg-core-1-build\" (UID: \"845832d9-625c-452e-b900-4e3c2df2ef4d\") " pod="service-telemetry/sg-core-1-build" Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.258183 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/845832d9-625c-452e-b900-4e3c2df2ef4d-node-pullsecrets\") pod 
\"sg-core-1-build\" (UID: \"845832d9-625c-452e-b900-4e3c2df2ef4d\") " pod="service-telemetry/sg-core-1-build" Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.258207 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/845832d9-625c-452e-b900-4e3c2df2ef4d-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"845832d9-625c-452e-b900-4e3c2df2ef4d\") " pod="service-telemetry/sg-core-1-build" Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.258244 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/845832d9-625c-452e-b900-4e3c2df2ef4d-builder-dockercfg-8h4xs-push\") pod \"sg-core-1-build\" (UID: \"845832d9-625c-452e-b900-4e3c2df2ef4d\") " pod="service-telemetry/sg-core-1-build" Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.258266 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/845832d9-625c-452e-b900-4e3c2df2ef4d-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"845832d9-625c-452e-b900-4e3c2df2ef4d\") " pod="service-telemetry/sg-core-1-build" Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.258280 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/845832d9-625c-452e-b900-4e3c2df2ef4d-container-storage-run\") pod \"sg-core-1-build\" (UID: \"845832d9-625c-452e-b900-4e3c2df2ef4d\") " pod="service-telemetry/sg-core-1-build" Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.258298 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/845832d9-625c-452e-b900-4e3c2df2ef4d-buildworkdir\") pod \"sg-core-1-build\" (UID: \"845832d9-625c-452e-b900-4e3c2df2ef4d\") " 
pod="service-telemetry/sg-core-1-build" Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.258311 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/845832d9-625c-452e-b900-4e3c2df2ef4d-builder-dockercfg-8h4xs-pull\") pod \"sg-core-1-build\" (UID: \"845832d9-625c-452e-b900-4e3c2df2ef4d\") " pod="service-telemetry/sg-core-1-build" Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.258791 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/845832d9-625c-452e-b900-4e3c2df2ef4d-build-system-configs\") pod \"sg-core-1-build\" (UID: \"845832d9-625c-452e-b900-4e3c2df2ef4d\") " pod="service-telemetry/sg-core-1-build" Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.258822 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/845832d9-625c-452e-b900-4e3c2df2ef4d-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"845832d9-625c-452e-b900-4e3c2df2ef4d\") " pod="service-telemetry/sg-core-1-build" Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.258978 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/845832d9-625c-452e-b900-4e3c2df2ef4d-container-storage-root\") pod \"sg-core-1-build\" (UID: \"845832d9-625c-452e-b900-4e3c2df2ef4d\") " pod="service-telemetry/sg-core-1-build" Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.259043 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/845832d9-625c-452e-b900-4e3c2df2ef4d-container-storage-run\") pod \"sg-core-1-build\" (UID: \"845832d9-625c-452e-b900-4e3c2df2ef4d\") " pod="service-telemetry/sg-core-1-build" Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.259044 
4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/845832d9-625c-452e-b900-4e3c2df2ef4d-buildcachedir\") pod \"sg-core-1-build\" (UID: \"845832d9-625c-452e-b900-4e3c2df2ef4d\") " pod="service-telemetry/sg-core-1-build" Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.259224 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/845832d9-625c-452e-b900-4e3c2df2ef4d-buildworkdir\") pod \"sg-core-1-build\" (UID: \"845832d9-625c-452e-b900-4e3c2df2ef4d\") " pod="service-telemetry/sg-core-1-build" Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.259703 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/845832d9-625c-452e-b900-4e3c2df2ef4d-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"845832d9-625c-452e-b900-4e3c2df2ef4d\") " pod="service-telemetry/sg-core-1-build" Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.259964 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/845832d9-625c-452e-b900-4e3c2df2ef4d-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"845832d9-625c-452e-b900-4e3c2df2ef4d\") " pod="service-telemetry/sg-core-1-build" Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.259972 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/845832d9-625c-452e-b900-4e3c2df2ef4d-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"845832d9-625c-452e-b900-4e3c2df2ef4d\") " pod="service-telemetry/sg-core-1-build" Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.265567 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: 
\"kubernetes.io/secret/845832d9-625c-452e-b900-4e3c2df2ef4d-builder-dockercfg-8h4xs-push\") pod \"sg-core-1-build\" (UID: \"845832d9-625c-452e-b900-4e3c2df2ef4d\") " pod="service-telemetry/sg-core-1-build" Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.270192 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/845832d9-625c-452e-b900-4e3c2df2ef4d-builder-dockercfg-8h4xs-pull\") pod \"sg-core-1-build\" (UID: \"845832d9-625c-452e-b900-4e3c2df2ef4d\") " pod="service-telemetry/sg-core-1-build" Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.282477 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7glkd\" (UniqueName: \"kubernetes.io/projected/845832d9-625c-452e-b900-4e3c2df2ef4d-kube-api-access-7glkd\") pod \"sg-core-1-build\" (UID: \"845832d9-625c-452e-b900-4e3c2df2ef4d\") " pod="service-telemetry/sg-core-1-build" Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.330047 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.739282 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 26 17:17:09 crc kubenswrapper[4856]: I0126 17:17:09.782645 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"845832d9-625c-452e-b900-4e3c2df2ef4d","Type":"ContainerStarted","Data":"37f3f15c961670bf793cb2d570bf81f5b90a0fceb924638d55df34f9da6e88b4"} Jan 26 17:17:10 crc kubenswrapper[4856]: I0126 17:17:10.790403 4856 generic.go:334] "Generic (PLEG): container finished" podID="845832d9-625c-452e-b900-4e3c2df2ef4d" containerID="86acff0987e388de3d9ce01fcd9e358d21cc1ecfc613874a4c26f37c1a31ec0b" exitCode=0 Jan 26 17:17:10 crc kubenswrapper[4856]: I0126 17:17:10.790557 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"845832d9-625c-452e-b900-4e3c2df2ef4d","Type":"ContainerDied","Data":"86acff0987e388de3d9ce01fcd9e358d21cc1ecfc613874a4c26f37c1a31ec0b"} Jan 26 17:17:11 crc kubenswrapper[4856]: I0126 17:17:11.799899 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"845832d9-625c-452e-b900-4e3c2df2ef4d","Type":"ContainerStarted","Data":"00702564f2498cf2bf404ba97ad4e9284443fddbcee3e0fe6e8490fc7acce163"} Jan 26 17:17:19 crc kubenswrapper[4856]: I0126 17:17:19.192803 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-core-1-build" podStartSLOduration=11.192765158 podStartE2EDuration="11.192765158s" podCreationTimestamp="2026-01-26 17:17:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:17:11.828766028 +0000 UTC m=+1127.782020009" watchObservedRunningTime="2026-01-26 17:17:19.192765158 +0000 UTC m=+1135.146019139" Jan 26 17:17:19 crc 
kubenswrapper[4856]: I0126 17:17:19.197372 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 26 17:17:19 crc kubenswrapper[4856]: I0126 17:17:19.197701 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/sg-core-1-build" podUID="845832d9-625c-452e-b900-4e3c2df2ef4d" containerName="docker-build" containerID="cri-o://00702564f2498cf2bf404ba97ad4e9284443fddbcee3e0fe6e8490fc7acce163" gracePeriod=30 Jan 26 17:17:19 crc kubenswrapper[4856]: I0126 17:17:19.617418 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_845832d9-625c-452e-b900-4e3c2df2ef4d/docker-build/0.log" Jan 26 17:17:19 crc kubenswrapper[4856]: I0126 17:17:19.618476 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 26 17:17:19 crc kubenswrapper[4856]: I0126 17:17:19.875654 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/845832d9-625c-452e-b900-4e3c2df2ef4d-buildcachedir\") pod \"845832d9-625c-452e-b900-4e3c2df2ef4d\" (UID: \"845832d9-625c-452e-b900-4e3c2df2ef4d\") " Jan 26 17:17:19 crc kubenswrapper[4856]: I0126 17:17:19.875789 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/845832d9-625c-452e-b900-4e3c2df2ef4d-container-storage-root\") pod \"845832d9-625c-452e-b900-4e3c2df2ef4d\" (UID: \"845832d9-625c-452e-b900-4e3c2df2ef4d\") " Jan 26 17:17:19 crc kubenswrapper[4856]: I0126 17:17:19.875840 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/845832d9-625c-452e-b900-4e3c2df2ef4d-build-blob-cache\") pod \"845832d9-625c-452e-b900-4e3c2df2ef4d\" (UID: \"845832d9-625c-452e-b900-4e3c2df2ef4d\") " Jan 26 17:17:19 crc 
kubenswrapper[4856]: I0126 17:17:19.875925 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/845832d9-625c-452e-b900-4e3c2df2ef4d-container-storage-run\") pod \"845832d9-625c-452e-b900-4e3c2df2ef4d\" (UID: \"845832d9-625c-452e-b900-4e3c2df2ef4d\") " Jan 26 17:17:19 crc kubenswrapper[4856]: I0126 17:17:19.875969 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/845832d9-625c-452e-b900-4e3c2df2ef4d-build-ca-bundles\") pod \"845832d9-625c-452e-b900-4e3c2df2ef4d\" (UID: \"845832d9-625c-452e-b900-4e3c2df2ef4d\") " Jan 26 17:17:19 crc kubenswrapper[4856]: I0126 17:17:19.876014 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/845832d9-625c-452e-b900-4e3c2df2ef4d-builder-dockercfg-8h4xs-push\") pod \"845832d9-625c-452e-b900-4e3c2df2ef4d\" (UID: \"845832d9-625c-452e-b900-4e3c2df2ef4d\") " Jan 26 17:17:19 crc kubenswrapper[4856]: I0126 17:17:19.876137 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/845832d9-625c-452e-b900-4e3c2df2ef4d-builder-dockercfg-8h4xs-pull\") pod \"845832d9-625c-452e-b900-4e3c2df2ef4d\" (UID: \"845832d9-625c-452e-b900-4e3c2df2ef4d\") " Jan 26 17:17:19 crc kubenswrapper[4856]: I0126 17:17:19.876169 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/845832d9-625c-452e-b900-4e3c2df2ef4d-build-system-configs\") pod \"845832d9-625c-452e-b900-4e3c2df2ef4d\" (UID: \"845832d9-625c-452e-b900-4e3c2df2ef4d\") " Jan 26 17:17:19 crc kubenswrapper[4856]: I0126 17:17:19.876198 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-7glkd\" (UniqueName: \"kubernetes.io/projected/845832d9-625c-452e-b900-4e3c2df2ef4d-kube-api-access-7glkd\") pod \"845832d9-625c-452e-b900-4e3c2df2ef4d\" (UID: \"845832d9-625c-452e-b900-4e3c2df2ef4d\") " Jan 26 17:17:19 crc kubenswrapper[4856]: I0126 17:17:19.876267 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/845832d9-625c-452e-b900-4e3c2df2ef4d-node-pullsecrets\") pod \"845832d9-625c-452e-b900-4e3c2df2ef4d\" (UID: \"845832d9-625c-452e-b900-4e3c2df2ef4d\") " Jan 26 17:17:19 crc kubenswrapper[4856]: I0126 17:17:19.876352 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/845832d9-625c-452e-b900-4e3c2df2ef4d-build-proxy-ca-bundles\") pod \"845832d9-625c-452e-b900-4e3c2df2ef4d\" (UID: \"845832d9-625c-452e-b900-4e3c2df2ef4d\") " Jan 26 17:17:19 crc kubenswrapper[4856]: I0126 17:17:19.876398 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/845832d9-625c-452e-b900-4e3c2df2ef4d-buildworkdir\") pod \"845832d9-625c-452e-b900-4e3c2df2ef4d\" (UID: \"845832d9-625c-452e-b900-4e3c2df2ef4d\") " Jan 26 17:17:19 crc kubenswrapper[4856]: I0126 17:17:19.878425 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/845832d9-625c-452e-b900-4e3c2df2ef4d-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "845832d9-625c-452e-b900-4e3c2df2ef4d" (UID: "845832d9-625c-452e-b900-4e3c2df2ef4d"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:17:19 crc kubenswrapper[4856]: I0126 17:17:19.878607 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/845832d9-625c-452e-b900-4e3c2df2ef4d-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "845832d9-625c-452e-b900-4e3c2df2ef4d" (UID: "845832d9-625c-452e-b900-4e3c2df2ef4d"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:17:19 crc kubenswrapper[4856]: I0126 17:17:19.879275 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/845832d9-625c-452e-b900-4e3c2df2ef4d-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "845832d9-625c-452e-b900-4e3c2df2ef4d" (UID: "845832d9-625c-452e-b900-4e3c2df2ef4d"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:17:19 crc kubenswrapper[4856]: I0126 17:17:19.881774 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/845832d9-625c-452e-b900-4e3c2df2ef4d-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "845832d9-625c-452e-b900-4e3c2df2ef4d" (UID: "845832d9-625c-452e-b900-4e3c2df2ef4d"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:17:19 crc kubenswrapper[4856]: I0126 17:17:19.882256 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/845832d9-625c-452e-b900-4e3c2df2ef4d-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "845832d9-625c-452e-b900-4e3c2df2ef4d" (UID: "845832d9-625c-452e-b900-4e3c2df2ef4d"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:17:19 crc kubenswrapper[4856]: I0126 17:17:19.882617 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/845832d9-625c-452e-b900-4e3c2df2ef4d-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "845832d9-625c-452e-b900-4e3c2df2ef4d" (UID: "845832d9-625c-452e-b900-4e3c2df2ef4d"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:17:19 crc kubenswrapper[4856]: I0126 17:17:19.885752 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/845832d9-625c-452e-b900-4e3c2df2ef4d-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "845832d9-625c-452e-b900-4e3c2df2ef4d" (UID: "845832d9-625c-452e-b900-4e3c2df2ef4d"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:17:19 crc kubenswrapper[4856]: I0126 17:17:19.885780 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/845832d9-625c-452e-b900-4e3c2df2ef4d-builder-dockercfg-8h4xs-push" (OuterVolumeSpecName: "builder-dockercfg-8h4xs-push") pod "845832d9-625c-452e-b900-4e3c2df2ef4d" (UID: "845832d9-625c-452e-b900-4e3c2df2ef4d"). InnerVolumeSpecName "builder-dockercfg-8h4xs-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:17:19 crc kubenswrapper[4856]: I0126 17:17:19.889883 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_845832d9-625c-452e-b900-4e3c2df2ef4d/docker-build/0.log" Jan 26 17:17:19 crc kubenswrapper[4856]: I0126 17:17:19.893397 4856 generic.go:334] "Generic (PLEG): container finished" podID="845832d9-625c-452e-b900-4e3c2df2ef4d" containerID="00702564f2498cf2bf404ba97ad4e9284443fddbcee3e0fe6e8490fc7acce163" exitCode=1 Jan 26 17:17:19 crc kubenswrapper[4856]: I0126 17:17:19.893463 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"845832d9-625c-452e-b900-4e3c2df2ef4d","Type":"ContainerDied","Data":"00702564f2498cf2bf404ba97ad4e9284443fddbcee3e0fe6e8490fc7acce163"} Jan 26 17:17:19 crc kubenswrapper[4856]: I0126 17:17:19.893508 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"845832d9-625c-452e-b900-4e3c2df2ef4d","Type":"ContainerDied","Data":"37f3f15c961670bf793cb2d570bf81f5b90a0fceb924638d55df34f9da6e88b4"} Jan 26 17:17:19 crc kubenswrapper[4856]: I0126 17:17:19.893587 4856 scope.go:117] "RemoveContainer" containerID="00702564f2498cf2bf404ba97ad4e9284443fddbcee3e0fe6e8490fc7acce163" Jan 26 17:17:19 crc kubenswrapper[4856]: I0126 17:17:19.893800 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 26 17:17:19 crc kubenswrapper[4856]: I0126 17:17:19.895785 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/845832d9-625c-452e-b900-4e3c2df2ef4d-builder-dockercfg-8h4xs-pull" (OuterVolumeSpecName: "builder-dockercfg-8h4xs-pull") pod "845832d9-625c-452e-b900-4e3c2df2ef4d" (UID: "845832d9-625c-452e-b900-4e3c2df2ef4d"). InnerVolumeSpecName "builder-dockercfg-8h4xs-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:17:19 crc kubenswrapper[4856]: I0126 17:17:19.910072 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/845832d9-625c-452e-b900-4e3c2df2ef4d-kube-api-access-7glkd" (OuterVolumeSpecName: "kube-api-access-7glkd") pod "845832d9-625c-452e-b900-4e3c2df2ef4d" (UID: "845832d9-625c-452e-b900-4e3c2df2ef4d"). InnerVolumeSpecName "kube-api-access-7glkd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:17:19 crc kubenswrapper[4856]: I0126 17:17:19.970361 4856 scope.go:117] "RemoveContainer" containerID="86acff0987e388de3d9ce01fcd9e358d21cc1ecfc613874a4c26f37c1a31ec0b" Jan 26 17:17:19 crc kubenswrapper[4856]: I0126 17:17:19.979219 4856 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/845832d9-625c-452e-b900-4e3c2df2ef4d-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 26 17:17:19 crc kubenswrapper[4856]: I0126 17:17:19.979280 4856 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/845832d9-625c-452e-b900-4e3c2df2ef4d-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 26 17:17:19 crc kubenswrapper[4856]: I0126 17:17:19.979367 4856 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/845832d9-625c-452e-b900-4e3c2df2ef4d-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 17:17:19 crc kubenswrapper[4856]: I0126 17:17:19.979382 4856 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/845832d9-625c-452e-b900-4e3c2df2ef4d-builder-dockercfg-8h4xs-push\") on node \"crc\" DevicePath \"\"" Jan 26 17:17:19 crc kubenswrapper[4856]: I0126 17:17:19.979418 4856 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: 
\"kubernetes.io/secret/845832d9-625c-452e-b900-4e3c2df2ef4d-builder-dockercfg-8h4xs-pull\") on node \"crc\" DevicePath \"\"" Jan 26 17:17:19 crc kubenswrapper[4856]: I0126 17:17:19.979432 4856 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/845832d9-625c-452e-b900-4e3c2df2ef4d-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 26 17:17:19 crc kubenswrapper[4856]: I0126 17:17:19.979440 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7glkd\" (UniqueName: \"kubernetes.io/projected/845832d9-625c-452e-b900-4e3c2df2ef4d-kube-api-access-7glkd\") on node \"crc\" DevicePath \"\"" Jan 26 17:17:19 crc kubenswrapper[4856]: I0126 17:17:19.979453 4856 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/845832d9-625c-452e-b900-4e3c2df2ef4d-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 26 17:17:19 crc kubenswrapper[4856]: I0126 17:17:19.979461 4856 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/845832d9-625c-452e-b900-4e3c2df2ef4d-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 17:17:19 crc kubenswrapper[4856]: I0126 17:17:19.979471 4856 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/845832d9-625c-452e-b900-4e3c2df2ef4d-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 26 17:17:20 crc kubenswrapper[4856]: I0126 17:17:20.000615 4856 scope.go:117] "RemoveContainer" containerID="00702564f2498cf2bf404ba97ad4e9284443fddbcee3e0fe6e8490fc7acce163" Jan 26 17:17:20 crc kubenswrapper[4856]: E0126 17:17:20.001342 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00702564f2498cf2bf404ba97ad4e9284443fddbcee3e0fe6e8490fc7acce163\": container with ID starting with 
00702564f2498cf2bf404ba97ad4e9284443fddbcee3e0fe6e8490fc7acce163 not found: ID does not exist" containerID="00702564f2498cf2bf404ba97ad4e9284443fddbcee3e0fe6e8490fc7acce163" Jan 26 17:17:20 crc kubenswrapper[4856]: I0126 17:17:20.001455 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00702564f2498cf2bf404ba97ad4e9284443fddbcee3e0fe6e8490fc7acce163"} err="failed to get container status \"00702564f2498cf2bf404ba97ad4e9284443fddbcee3e0fe6e8490fc7acce163\": rpc error: code = NotFound desc = could not find container \"00702564f2498cf2bf404ba97ad4e9284443fddbcee3e0fe6e8490fc7acce163\": container with ID starting with 00702564f2498cf2bf404ba97ad4e9284443fddbcee3e0fe6e8490fc7acce163 not found: ID does not exist" Jan 26 17:17:20 crc kubenswrapper[4856]: I0126 17:17:20.001514 4856 scope.go:117] "RemoveContainer" containerID="86acff0987e388de3d9ce01fcd9e358d21cc1ecfc613874a4c26f37c1a31ec0b" Jan 26 17:17:20 crc kubenswrapper[4856]: E0126 17:17:20.002065 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86acff0987e388de3d9ce01fcd9e358d21cc1ecfc613874a4c26f37c1a31ec0b\": container with ID starting with 86acff0987e388de3d9ce01fcd9e358d21cc1ecfc613874a4c26f37c1a31ec0b not found: ID does not exist" containerID="86acff0987e388de3d9ce01fcd9e358d21cc1ecfc613874a4c26f37c1a31ec0b" Jan 26 17:17:20 crc kubenswrapper[4856]: I0126 17:17:20.002104 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86acff0987e388de3d9ce01fcd9e358d21cc1ecfc613874a4c26f37c1a31ec0b"} err="failed to get container status \"86acff0987e388de3d9ce01fcd9e358d21cc1ecfc613874a4c26f37c1a31ec0b\": rpc error: code = NotFound desc = could not find container \"86acff0987e388de3d9ce01fcd9e358d21cc1ecfc613874a4c26f37c1a31ec0b\": container with ID starting with 86acff0987e388de3d9ce01fcd9e358d21cc1ecfc613874a4c26f37c1a31ec0b not found: ID does not 
exist" Jan 26 17:17:20 crc kubenswrapper[4856]: I0126 17:17:20.005682 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/845832d9-625c-452e-b900-4e3c2df2ef4d-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "845832d9-625c-452e-b900-4e3c2df2ef4d" (UID: "845832d9-625c-452e-b900-4e3c2df2ef4d"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:17:20 crc kubenswrapper[4856]: I0126 17:17:20.055499 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/845832d9-625c-452e-b900-4e3c2df2ef4d-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "845832d9-625c-452e-b900-4e3c2df2ef4d" (UID: "845832d9-625c-452e-b900-4e3c2df2ef4d"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:17:20 crc kubenswrapper[4856]: I0126 17:17:20.080629 4856 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/845832d9-625c-452e-b900-4e3c2df2ef4d-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 26 17:17:20 crc kubenswrapper[4856]: I0126 17:17:20.080707 4856 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/845832d9-625c-452e-b900-4e3c2df2ef4d-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 26 17:17:20 crc kubenswrapper[4856]: I0126 17:17:20.228728 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 26 17:17:20 crc kubenswrapper[4856]: I0126 17:17:20.235980 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 26 17:17:21 crc kubenswrapper[4856]: I0126 17:17:21.074009 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-2-build"] Jan 26 17:17:21 crc kubenswrapper[4856]: E0126 
17:17:21.074631 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="845832d9-625c-452e-b900-4e3c2df2ef4d" containerName="docker-build" Jan 26 17:17:21 crc kubenswrapper[4856]: I0126 17:17:21.074652 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="845832d9-625c-452e-b900-4e3c2df2ef4d" containerName="docker-build" Jan 26 17:17:21 crc kubenswrapper[4856]: E0126 17:17:21.074667 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="845832d9-625c-452e-b900-4e3c2df2ef4d" containerName="manage-dockerfile" Jan 26 17:17:21 crc kubenswrapper[4856]: I0126 17:17:21.074674 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="845832d9-625c-452e-b900-4e3c2df2ef4d" containerName="manage-dockerfile" Jan 26 17:17:21 crc kubenswrapper[4856]: I0126 17:17:21.074855 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="845832d9-625c-452e-b900-4e3c2df2ef4d" containerName="docker-build" Jan 26 17:17:21 crc kubenswrapper[4856]: I0126 17:17:21.077587 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 26 17:17:21 crc kubenswrapper[4856]: I0126 17:17:21.080998 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-sys-config" Jan 26 17:17:21 crc kubenswrapper[4856]: I0126 17:17:21.081050 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-ca" Jan 26 17:17:21 crc kubenswrapper[4856]: I0126 17:17:21.081482 4856 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-8h4xs" Jan 26 17:17:21 crc kubenswrapper[4856]: I0126 17:17:21.081620 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-global-ca" Jan 26 17:17:21 crc kubenswrapper[4856]: I0126 17:17:21.094374 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Jan 26 17:17:21 crc kubenswrapper[4856]: I0126 17:17:21.115739 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vg4d\" (UniqueName: \"kubernetes.io/projected/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-kube-api-access-9vg4d\") pod \"sg-core-2-build\" (UID: \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\") " pod="service-telemetry/sg-core-2-build" Jan 26 17:17:21 crc kubenswrapper[4856]: I0126 17:17:21.116252 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-build-system-configs\") pod \"sg-core-2-build\" (UID: \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\") " pod="service-telemetry/sg-core-2-build" Jan 26 17:17:21 crc kubenswrapper[4856]: I0126 17:17:21.116409 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-container-storage-root\") pod \"sg-core-2-build\" (UID: \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\") " pod="service-telemetry/sg-core-2-build" Jan 26 17:17:21 crc kubenswrapper[4856]: I0126 17:17:21.116750 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\") " pod="service-telemetry/sg-core-2-build" Jan 26 17:17:21 crc kubenswrapper[4856]: I0126 17:17:21.116844 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\") " pod="service-telemetry/sg-core-2-build" Jan 26 17:17:21 crc kubenswrapper[4856]: I0126 17:17:21.117045 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-buildcachedir\") pod \"sg-core-2-build\" (UID: \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\") " pod="service-telemetry/sg-core-2-build" Jan 26 17:17:21 crc kubenswrapper[4856]: I0126 17:17:21.117143 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-builder-dockercfg-8h4xs-push\") pod \"sg-core-2-build\" (UID: \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\") " pod="service-telemetry/sg-core-2-build" Jan 26 17:17:21 crc kubenswrapper[4856]: I0126 17:17:21.117229 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-buildworkdir\") pod \"sg-core-2-build\" (UID: \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\") " pod="service-telemetry/sg-core-2-build" Jan 26 17:17:21 crc kubenswrapper[4856]: I0126 17:17:21.117369 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-container-storage-run\") pod \"sg-core-2-build\" (UID: \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\") " pod="service-telemetry/sg-core-2-build" Jan 26 17:17:21 crc kubenswrapper[4856]: I0126 17:17:21.117512 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\") " pod="service-telemetry/sg-core-2-build" Jan 26 17:17:21 crc kubenswrapper[4856]: I0126 17:17:21.117616 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-builder-dockercfg-8h4xs-pull\") pod \"sg-core-2-build\" (UID: \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\") " pod="service-telemetry/sg-core-2-build" Jan 26 17:17:21 crc kubenswrapper[4856]: I0126 17:17:21.117723 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\") " pod="service-telemetry/sg-core-2-build" Jan 26 17:17:21 crc kubenswrapper[4856]: I0126 17:17:21.219654 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vg4d\" (UniqueName: 
\"kubernetes.io/projected/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-kube-api-access-9vg4d\") pod \"sg-core-2-build\" (UID: \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\") " pod="service-telemetry/sg-core-2-build" Jan 26 17:17:21 crc kubenswrapper[4856]: I0126 17:17:21.220569 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-build-system-configs\") pod \"sg-core-2-build\" (UID: \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\") " pod="service-telemetry/sg-core-2-build" Jan 26 17:17:21 crc kubenswrapper[4856]: I0126 17:17:21.220728 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-container-storage-root\") pod \"sg-core-2-build\" (UID: \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\") " pod="service-telemetry/sg-core-2-build" Jan 26 17:17:21 crc kubenswrapper[4856]: I0126 17:17:21.220824 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\") " pod="service-telemetry/sg-core-2-build" Jan 26 17:17:21 crc kubenswrapper[4856]: I0126 17:17:21.220876 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\") " pod="service-telemetry/sg-core-2-build" Jan 26 17:17:21 crc kubenswrapper[4856]: I0126 17:17:21.220936 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-buildcachedir\") pod 
\"sg-core-2-build\" (UID: \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\") " pod="service-telemetry/sg-core-2-build" Jan 26 17:17:21 crc kubenswrapper[4856]: I0126 17:17:21.221015 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-builder-dockercfg-8h4xs-push\") pod \"sg-core-2-build\" (UID: \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\") " pod="service-telemetry/sg-core-2-build" Jan 26 17:17:21 crc kubenswrapper[4856]: I0126 17:17:21.221081 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-buildworkdir\") pod \"sg-core-2-build\" (UID: \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\") " pod="service-telemetry/sg-core-2-build" Jan 26 17:17:21 crc kubenswrapper[4856]: I0126 17:17:21.221104 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-container-storage-run\") pod \"sg-core-2-build\" (UID: \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\") " pod="service-telemetry/sg-core-2-build" Jan 26 17:17:21 crc kubenswrapper[4856]: I0126 17:17:21.221125 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\") " pod="service-telemetry/sg-core-2-build" Jan 26 17:17:21 crc kubenswrapper[4856]: I0126 17:17:21.221141 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\") " pod="service-telemetry/sg-core-2-build" 
Jan 26 17:17:21 crc kubenswrapper[4856]: I0126 17:17:21.221153 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-builder-dockercfg-8h4xs-pull\") pod \"sg-core-2-build\" (UID: \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\") " pod="service-telemetry/sg-core-2-build" Jan 26 17:17:21 crc kubenswrapper[4856]: I0126 17:17:21.221188 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-container-storage-root\") pod \"sg-core-2-build\" (UID: \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\") " pod="service-telemetry/sg-core-2-build" Jan 26 17:17:21 crc kubenswrapper[4856]: I0126 17:17:21.221235 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\") " pod="service-telemetry/sg-core-2-build" Jan 26 17:17:21 crc kubenswrapper[4856]: I0126 17:17:21.221361 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\") " pod="service-telemetry/sg-core-2-build" Jan 26 17:17:21 crc kubenswrapper[4856]: I0126 17:17:21.221453 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-build-system-configs\") pod \"sg-core-2-build\" (UID: \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\") " pod="service-telemetry/sg-core-2-build" Jan 26 17:17:21 crc kubenswrapper[4856]: I0126 17:17:21.221594 4856 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-buildcachedir\") pod \"sg-core-2-build\" (UID: \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\") " pod="service-telemetry/sg-core-2-build" Jan 26 17:17:21 crc kubenswrapper[4856]: I0126 17:17:21.221688 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-buildworkdir\") pod \"sg-core-2-build\" (UID: \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\") " pod="service-telemetry/sg-core-2-build" Jan 26 17:17:21 crc kubenswrapper[4856]: I0126 17:17:21.221861 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-container-storage-run\") pod \"sg-core-2-build\" (UID: \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\") " pod="service-telemetry/sg-core-2-build" Jan 26 17:17:21 crc kubenswrapper[4856]: I0126 17:17:21.221905 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\") " pod="service-telemetry/sg-core-2-build" Jan 26 17:17:21 crc kubenswrapper[4856]: I0126 17:17:21.222358 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\") " pod="service-telemetry/sg-core-2-build" Jan 26 17:17:21 crc kubenswrapper[4856]: I0126 17:17:21.225982 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: 
\"kubernetes.io/secret/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-builder-dockercfg-8h4xs-push\") pod \"sg-core-2-build\" (UID: \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\") " pod="service-telemetry/sg-core-2-build" Jan 26 17:17:21 crc kubenswrapper[4856]: I0126 17:17:21.226049 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-builder-dockercfg-8h4xs-pull\") pod \"sg-core-2-build\" (UID: \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\") " pod="service-telemetry/sg-core-2-build" Jan 26 17:17:21 crc kubenswrapper[4856]: I0126 17:17:21.240757 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vg4d\" (UniqueName: \"kubernetes.io/projected/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-kube-api-access-9vg4d\") pod \"sg-core-2-build\" (UID: \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\") " pod="service-telemetry/sg-core-2-build" Jan 26 17:17:21 crc kubenswrapper[4856]: I0126 17:17:21.404824 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="845832d9-625c-452e-b900-4e3c2df2ef4d" path="/var/lib/kubelet/pods/845832d9-625c-452e-b900-4e3c2df2ef4d/volumes" Jan 26 17:17:21 crc kubenswrapper[4856]: I0126 17:17:21.442953 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 26 17:17:21 crc kubenswrapper[4856]: I0126 17:17:21.667444 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Jan 26 17:17:21 crc kubenswrapper[4856]: I0126 17:17:21.907673 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906","Type":"ContainerStarted","Data":"581a54ce580bf89c79c4ca5c090533d91b572d9478534f85f14b11f0a695bf7c"} Jan 26 17:17:22 crc kubenswrapper[4856]: I0126 17:17:22.918833 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906","Type":"ContainerStarted","Data":"c3d2079125adda4fff730760ad4a3c9ff84f0c4bb91f73b927cd7c20ec1365fc"} Jan 26 17:17:24 crc kubenswrapper[4856]: I0126 17:17:24.105442 4856 generic.go:334] "Generic (PLEG): container finished" podID="230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906" containerID="c3d2079125adda4fff730760ad4a3c9ff84f0c4bb91f73b927cd7c20ec1365fc" exitCode=0 Jan 26 17:17:24 crc kubenswrapper[4856]: I0126 17:17:24.106448 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906","Type":"ContainerDied","Data":"c3d2079125adda4fff730760ad4a3c9ff84f0c4bb91f73b927cd7c20ec1365fc"} Jan 26 17:17:25 crc kubenswrapper[4856]: I0126 17:17:25.115448 4856 generic.go:334] "Generic (PLEG): container finished" podID="230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906" containerID="1e916af40f386b7c7eeeca4a494f1a4ea6edb3f8a20e51f7232fa3f4df21515f" exitCode=0 Jan 26 17:17:25 crc kubenswrapper[4856]: I0126 17:17:25.115505 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906","Type":"ContainerDied","Data":"1e916af40f386b7c7eeeca4a494f1a4ea6edb3f8a20e51f7232fa3f4df21515f"} Jan 26 17:17:25 crc 
kubenswrapper[4856]: I0126 17:17:25.165385 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-2-build_230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906/manage-dockerfile/0.log" Jan 26 17:17:26 crc kubenswrapper[4856]: I0126 17:17:26.126808 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906","Type":"ContainerStarted","Data":"0c8b0cbcc314d24ca3abe306e12646e8cc81d7ef948190b4214ea9f83847d71e"} Jan 26 17:17:26 crc kubenswrapper[4856]: I0126 17:17:26.165656 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-core-2-build" podStartSLOduration=5.165640123 podStartE2EDuration="5.165640123s" podCreationTimestamp="2026-01-26 17:17:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:17:26.165225321 +0000 UTC m=+1142.118479332" watchObservedRunningTime="2026-01-26 17:17:26.165640123 +0000 UTC m=+1142.118894104" Jan 26 17:18:56 crc kubenswrapper[4856]: I0126 17:18:56.939229 4856 patch_prober.go:28] interesting pod/machine-config-daemon-xm9cq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:18:56 crc kubenswrapper[4856]: I0126 17:18:56.940159 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:19:26 crc kubenswrapper[4856]: I0126 17:19:26.938865 4856 patch_prober.go:28] interesting pod/machine-config-daemon-xm9cq container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:19:26 crc kubenswrapper[4856]: I0126 17:19:26.939459 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:19:56 crc kubenswrapper[4856]: I0126 17:19:56.938637 4856 patch_prober.go:28] interesting pod/machine-config-daemon-xm9cq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:19:56 crc kubenswrapper[4856]: I0126 17:19:56.939238 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:19:56 crc kubenswrapper[4856]: I0126 17:19:56.939296 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" Jan 26 17:19:56 crc kubenswrapper[4856]: I0126 17:19:56.940145 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5846ab4d870be5fcbab796c3e27690d2c13d129480d6fcd21b3b0d1c535f0cff"} pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 17:19:56 crc 
kubenswrapper[4856]: I0126 17:19:56.940232 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" containerName="machine-config-daemon" containerID="cri-o://5846ab4d870be5fcbab796c3e27690d2c13d129480d6fcd21b3b0d1c535f0cff" gracePeriod=600 Jan 26 17:19:57 crc kubenswrapper[4856]: I0126 17:19:57.851713 4856 generic.go:334] "Generic (PLEG): container finished" podID="63c75ede-5170-4db0-811b-5217ef8d72b3" containerID="5846ab4d870be5fcbab796c3e27690d2c13d129480d6fcd21b3b0d1c535f0cff" exitCode=0 Jan 26 17:19:57 crc kubenswrapper[4856]: I0126 17:19:57.851747 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" event={"ID":"63c75ede-5170-4db0-811b-5217ef8d72b3","Type":"ContainerDied","Data":"5846ab4d870be5fcbab796c3e27690d2c13d129480d6fcd21b3b0d1c535f0cff"} Jan 26 17:19:57 crc kubenswrapper[4856]: I0126 17:19:57.852565 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" event={"ID":"63c75ede-5170-4db0-811b-5217ef8d72b3","Type":"ContainerStarted","Data":"cda3cdbac0b1e3c460ee9a5617b9c5fd59d4db5c67a69b81c9224934be12563c"} Jan 26 17:19:57 crc kubenswrapper[4856]: I0126 17:19:57.852640 4856 scope.go:117] "RemoveContainer" containerID="fdaad4602089daad40b0395fbc761e615a8ba2a94c8f5b977142a787034cddb7" Jan 26 17:20:46 crc kubenswrapper[4856]: E0126 17:20:46.997977 4856 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.604s" Jan 26 17:21:19 crc kubenswrapper[4856]: I0126 17:21:19.253610 4856 generic.go:334] "Generic (PLEG): container finished" podID="230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906" containerID="0c8b0cbcc314d24ca3abe306e12646e8cc81d7ef948190b4214ea9f83847d71e" exitCode=0 Jan 26 17:21:19 crc kubenswrapper[4856]: I0126 17:21:19.253748 4856 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906","Type":"ContainerDied","Data":"0c8b0cbcc314d24ca3abe306e12646e8cc81d7ef948190b4214ea9f83847d71e"} Jan 26 17:21:20 crc kubenswrapper[4856]: I0126 17:21:20.564739 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 26 17:21:20 crc kubenswrapper[4856]: I0126 17:21:20.686923 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-build-ca-bundles\") pod \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\" (UID: \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\") " Jan 26 17:21:20 crc kubenswrapper[4856]: I0126 17:21:20.686996 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-buildworkdir\") pod \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\" (UID: \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\") " Jan 26 17:21:20 crc kubenswrapper[4856]: I0126 17:21:20.687024 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-container-storage-root\") pod \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\" (UID: \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\") " Jan 26 17:21:20 crc kubenswrapper[4856]: I0126 17:21:20.687083 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-builder-dockercfg-8h4xs-pull\") pod \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\" (UID: \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\") " Jan 26 17:21:20 crc kubenswrapper[4856]: I0126 17:21:20.687120 4856 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-builder-dockercfg-8h4xs-push\") pod \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\" (UID: \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\") " Jan 26 17:21:20 crc kubenswrapper[4856]: I0126 17:21:20.687146 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-build-system-configs\") pod \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\" (UID: \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\") " Jan 26 17:21:20 crc kubenswrapper[4856]: I0126 17:21:20.687169 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-build-blob-cache\") pod \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\" (UID: \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\") " Jan 26 17:21:20 crc kubenswrapper[4856]: I0126 17:21:20.687205 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-container-storage-run\") pod \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\" (UID: \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\") " Jan 26 17:21:20 crc kubenswrapper[4856]: I0126 17:21:20.687242 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-node-pullsecrets\") pod \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\" (UID: \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\") " Jan 26 17:21:20 crc kubenswrapper[4856]: I0126 17:21:20.687264 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-build-proxy-ca-bundles\") pod \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\" (UID: \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\") " Jan 26 17:21:20 crc kubenswrapper[4856]: I0126 17:21:20.687288 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vg4d\" (UniqueName: \"kubernetes.io/projected/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-kube-api-access-9vg4d\") pod \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\" (UID: \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\") " Jan 26 17:21:20 crc kubenswrapper[4856]: I0126 17:21:20.687333 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-buildcachedir\") pod \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\" (UID: \"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906\") " Jan 26 17:21:20 crc kubenswrapper[4856]: I0126 17:21:20.687600 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906" (UID: "230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:21:20 crc kubenswrapper[4856]: I0126 17:21:20.687640 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906" (UID: "230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:21:20 crc kubenswrapper[4856]: I0126 17:21:20.689232 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906" (UID: "230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:21:20 crc kubenswrapper[4856]: I0126 17:21:20.689244 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906" (UID: "230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:21:20 crc kubenswrapper[4856]: I0126 17:21:20.690328 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906" (UID: "230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:21:20 crc kubenswrapper[4856]: I0126 17:21:20.690764 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906" (UID: "230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:21:20 crc kubenswrapper[4856]: I0126 17:21:20.693408 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-builder-dockercfg-8h4xs-pull" (OuterVolumeSpecName: "builder-dockercfg-8h4xs-pull") pod "230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906" (UID: "230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906"). InnerVolumeSpecName "builder-dockercfg-8h4xs-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:21:20 crc kubenswrapper[4856]: I0126 17:21:20.693427 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-builder-dockercfg-8h4xs-push" (OuterVolumeSpecName: "builder-dockercfg-8h4xs-push") pod "230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906" (UID: "230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906"). InnerVolumeSpecName "builder-dockercfg-8h4xs-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:21:20 crc kubenswrapper[4856]: I0126 17:21:20.703187 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-kube-api-access-9vg4d" (OuterVolumeSpecName: "kube-api-access-9vg4d") pod "230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906" (UID: "230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906"). InnerVolumeSpecName "kube-api-access-9vg4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:21:20 crc kubenswrapper[4856]: I0126 17:21:20.713256 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906" (UID: "230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:21:20 crc kubenswrapper[4856]: I0126 17:21:20.788733 4856 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-builder-dockercfg-8h4xs-push\") on node \"crc\" DevicePath \"\"" Jan 26 17:21:20 crc kubenswrapper[4856]: I0126 17:21:20.788775 4856 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 26 17:21:20 crc kubenswrapper[4856]: I0126 17:21:20.788789 4856 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 26 17:21:20 crc kubenswrapper[4856]: I0126 17:21:20.788803 4856 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 26 17:21:20 crc kubenswrapper[4856]: I0126 17:21:20.788816 4856 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 17:21:20 crc kubenswrapper[4856]: I0126 17:21:20.788827 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vg4d\" (UniqueName: \"kubernetes.io/projected/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-kube-api-access-9vg4d\") on node \"crc\" DevicePath \"\"" Jan 26 17:21:20 crc kubenswrapper[4856]: I0126 17:21:20.788838 4856 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-buildcachedir\") on node \"crc\" 
DevicePath \"\"" Jan 26 17:21:20 crc kubenswrapper[4856]: I0126 17:21:20.788850 4856 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 17:21:20 crc kubenswrapper[4856]: I0126 17:21:20.788862 4856 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 26 17:21:20 crc kubenswrapper[4856]: I0126 17:21:20.788875 4856 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-builder-dockercfg-8h4xs-pull\") on node \"crc\" DevicePath \"\"" Jan 26 17:21:21 crc kubenswrapper[4856]: I0126 17:21:21.008433 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906" (UID: "230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:21:21 crc kubenswrapper[4856]: I0126 17:21:21.093289 4856 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 26 17:21:21 crc kubenswrapper[4856]: I0126 17:21:21.269509 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906","Type":"ContainerDied","Data":"581a54ce580bf89c79c4ca5c090533d91b572d9478534f85f14b11f0a695bf7c"} Jan 26 17:21:21 crc kubenswrapper[4856]: I0126 17:21:21.269589 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="581a54ce580bf89c79c4ca5c090533d91b572d9478534f85f14b11f0a695bf7c" Jan 26 17:21:21 crc kubenswrapper[4856]: I0126 17:21:21.269915 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 26 17:21:23 crc kubenswrapper[4856]: I0126 17:21:23.156301 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906" (UID: "230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:21:23 crc kubenswrapper[4856]: I0126 17:21:23.225532 4856 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 26 17:21:26 crc kubenswrapper[4856]: I0126 17:21:26.051504 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 26 17:21:26 crc kubenswrapper[4856]: E0126 17:21:26.052239 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906" containerName="docker-build" Jan 26 17:21:26 crc kubenswrapper[4856]: I0126 17:21:26.052264 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906" containerName="docker-build" Jan 26 17:21:26 crc kubenswrapper[4856]: E0126 17:21:26.052283 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906" containerName="manage-dockerfile" Jan 26 17:21:26 crc kubenswrapper[4856]: I0126 17:21:26.052292 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906" containerName="manage-dockerfile" Jan 26 17:21:26 crc kubenswrapper[4856]: E0126 17:21:26.052308 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906" containerName="git-clone" Jan 26 17:21:26 crc kubenswrapper[4856]: I0126 17:21:26.052317 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906" containerName="git-clone" Jan 26 17:21:26 crc kubenswrapper[4856]: I0126 17:21:26.052547 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="230fa7ab-5d3d-40d2-bcf1-6ee5a68a3906" containerName="docker-build" Jan 26 17:21:26 crc kubenswrapper[4856]: I0126 17:21:26.053437 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 26 17:21:26 crc kubenswrapper[4856]: I0126 17:21:26.055567 4856 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-8h4xs" Jan 26 17:21:26 crc kubenswrapper[4856]: I0126 17:21:26.055849 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-ca" Jan 26 17:21:26 crc kubenswrapper[4856]: I0126 17:21:26.059043 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-sys-config" Jan 26 17:21:26 crc kubenswrapper[4856]: I0126 17:21:26.059671 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-global-ca" Jan 26 17:21:26 crc kubenswrapper[4856]: I0126 17:21:26.071486 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 26 17:21:26 crc kubenswrapper[4856]: I0126 17:21:26.169968 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d55d4558-c971-4a40-b1cb-41389cbf11c3-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"d55d4558-c971-4a40-b1cb-41389cbf11c3\") " pod="service-telemetry/sg-bridge-1-build" Jan 26 17:21:26 crc kubenswrapper[4856]: I0126 17:21:26.170035 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpgwg\" (UniqueName: \"kubernetes.io/projected/d55d4558-c971-4a40-b1cb-41389cbf11c3-kube-api-access-jpgwg\") pod \"sg-bridge-1-build\" (UID: \"d55d4558-c971-4a40-b1cb-41389cbf11c3\") " pod="service-telemetry/sg-bridge-1-build" Jan 26 17:21:26 crc kubenswrapper[4856]: I0126 17:21:26.170071 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/d55d4558-c971-4a40-b1cb-41389cbf11c3-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"d55d4558-c971-4a40-b1cb-41389cbf11c3\") " pod="service-telemetry/sg-bridge-1-build" Jan 26 17:21:26 crc kubenswrapper[4856]: I0126 17:21:26.170109 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d55d4558-c971-4a40-b1cb-41389cbf11c3-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"d55d4558-c971-4a40-b1cb-41389cbf11c3\") " pod="service-telemetry/sg-bridge-1-build" Jan 26 17:21:26 crc kubenswrapper[4856]: I0126 17:21:26.170229 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d55d4558-c971-4a40-b1cb-41389cbf11c3-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"d55d4558-c971-4a40-b1cb-41389cbf11c3\") " pod="service-telemetry/sg-bridge-1-build" Jan 26 17:21:26 crc kubenswrapper[4856]: I0126 17:21:26.170313 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d55d4558-c971-4a40-b1cb-41389cbf11c3-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"d55d4558-c971-4a40-b1cb-41389cbf11c3\") " pod="service-telemetry/sg-bridge-1-build" Jan 26 17:21:26 crc kubenswrapper[4856]: I0126 17:21:26.170344 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d55d4558-c971-4a40-b1cb-41389cbf11c3-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"d55d4558-c971-4a40-b1cb-41389cbf11c3\") " pod="service-telemetry/sg-bridge-1-build" Jan 26 17:21:26 crc kubenswrapper[4856]: I0126 17:21:26.170382 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/d55d4558-c971-4a40-b1cb-41389cbf11c3-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"d55d4558-c971-4a40-b1cb-41389cbf11c3\") " pod="service-telemetry/sg-bridge-1-build" Jan 26 17:21:26 crc kubenswrapper[4856]: I0126 17:21:26.170483 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d55d4558-c971-4a40-b1cb-41389cbf11c3-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"d55d4558-c971-4a40-b1cb-41389cbf11c3\") " pod="service-telemetry/sg-bridge-1-build" Jan 26 17:21:26 crc kubenswrapper[4856]: I0126 17:21:26.170550 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/d55d4558-c971-4a40-b1cb-41389cbf11c3-builder-dockercfg-8h4xs-push\") pod \"sg-bridge-1-build\" (UID: \"d55d4558-c971-4a40-b1cb-41389cbf11c3\") " pod="service-telemetry/sg-bridge-1-build" Jan 26 17:21:26 crc kubenswrapper[4856]: I0126 17:21:26.170587 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/d55d4558-c971-4a40-b1cb-41389cbf11c3-builder-dockercfg-8h4xs-pull\") pod \"sg-bridge-1-build\" (UID: \"d55d4558-c971-4a40-b1cb-41389cbf11c3\") " pod="service-telemetry/sg-bridge-1-build" Jan 26 17:21:26 crc kubenswrapper[4856]: I0126 17:21:26.170607 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d55d4558-c971-4a40-b1cb-41389cbf11c3-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"d55d4558-c971-4a40-b1cb-41389cbf11c3\") " pod="service-telemetry/sg-bridge-1-build" Jan 26 17:21:26 crc kubenswrapper[4856]: I0126 17:21:26.271897 4856 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-jpgwg\" (UniqueName: \"kubernetes.io/projected/d55d4558-c971-4a40-b1cb-41389cbf11c3-kube-api-access-jpgwg\") pod \"sg-bridge-1-build\" (UID: \"d55d4558-c971-4a40-b1cb-41389cbf11c3\") " pod="service-telemetry/sg-bridge-1-build" Jan 26 17:21:26 crc kubenswrapper[4856]: I0126 17:21:26.271953 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d55d4558-c971-4a40-b1cb-41389cbf11c3-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"d55d4558-c971-4a40-b1cb-41389cbf11c3\") " pod="service-telemetry/sg-bridge-1-build" Jan 26 17:21:26 crc kubenswrapper[4856]: I0126 17:21:26.271998 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d55d4558-c971-4a40-b1cb-41389cbf11c3-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"d55d4558-c971-4a40-b1cb-41389cbf11c3\") " pod="service-telemetry/sg-bridge-1-build" Jan 26 17:21:26 crc kubenswrapper[4856]: I0126 17:21:26.272022 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d55d4558-c971-4a40-b1cb-41389cbf11c3-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"d55d4558-c971-4a40-b1cb-41389cbf11c3\") " pod="service-telemetry/sg-bridge-1-build" Jan 26 17:21:26 crc kubenswrapper[4856]: I0126 17:21:26.272054 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d55d4558-c971-4a40-b1cb-41389cbf11c3-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"d55d4558-c971-4a40-b1cb-41389cbf11c3\") " pod="service-telemetry/sg-bridge-1-build" Jan 26 17:21:26 crc kubenswrapper[4856]: I0126 17:21:26.272079 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/d55d4558-c971-4a40-b1cb-41389cbf11c3-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"d55d4558-c971-4a40-b1cb-41389cbf11c3\") " pod="service-telemetry/sg-bridge-1-build" Jan 26 17:21:26 crc kubenswrapper[4856]: I0126 17:21:26.272106 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d55d4558-c971-4a40-b1cb-41389cbf11c3-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"d55d4558-c971-4a40-b1cb-41389cbf11c3\") " pod="service-telemetry/sg-bridge-1-build" Jan 26 17:21:26 crc kubenswrapper[4856]: I0126 17:21:26.272098 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d55d4558-c971-4a40-b1cb-41389cbf11c3-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"d55d4558-c971-4a40-b1cb-41389cbf11c3\") " pod="service-telemetry/sg-bridge-1-build" Jan 26 17:21:26 crc kubenswrapper[4856]: I0126 17:21:26.272154 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d55d4558-c971-4a40-b1cb-41389cbf11c3-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"d55d4558-c971-4a40-b1cb-41389cbf11c3\") " pod="service-telemetry/sg-bridge-1-build" Jan 26 17:21:26 crc kubenswrapper[4856]: I0126 17:21:26.272187 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/d55d4558-c971-4a40-b1cb-41389cbf11c3-builder-dockercfg-8h4xs-push\") pod \"sg-bridge-1-build\" (UID: \"d55d4558-c971-4a40-b1cb-41389cbf11c3\") " pod="service-telemetry/sg-bridge-1-build" Jan 26 17:21:26 crc kubenswrapper[4856]: I0126 17:21:26.272219 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: 
\"kubernetes.io/secret/d55d4558-c971-4a40-b1cb-41389cbf11c3-builder-dockercfg-8h4xs-pull\") pod \"sg-bridge-1-build\" (UID: \"d55d4558-c971-4a40-b1cb-41389cbf11c3\") " pod="service-telemetry/sg-bridge-1-build" Jan 26 17:21:26 crc kubenswrapper[4856]: I0126 17:21:26.272246 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d55d4558-c971-4a40-b1cb-41389cbf11c3-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"d55d4558-c971-4a40-b1cb-41389cbf11c3\") " pod="service-telemetry/sg-bridge-1-build" Jan 26 17:21:26 crc kubenswrapper[4856]: I0126 17:21:26.272307 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d55d4558-c971-4a40-b1cb-41389cbf11c3-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"d55d4558-c971-4a40-b1cb-41389cbf11c3\") " pod="service-telemetry/sg-bridge-1-build" Jan 26 17:21:26 crc kubenswrapper[4856]: I0126 17:21:26.272418 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d55d4558-c971-4a40-b1cb-41389cbf11c3-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"d55d4558-c971-4a40-b1cb-41389cbf11c3\") " pod="service-telemetry/sg-bridge-1-build" Jan 26 17:21:26 crc kubenswrapper[4856]: I0126 17:21:26.273028 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d55d4558-c971-4a40-b1cb-41389cbf11c3-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"d55d4558-c971-4a40-b1cb-41389cbf11c3\") " pod="service-telemetry/sg-bridge-1-build" Jan 26 17:21:26 crc kubenswrapper[4856]: I0126 17:21:26.273298 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d55d4558-c971-4a40-b1cb-41389cbf11c3-build-blob-cache\") pod 
\"sg-bridge-1-build\" (UID: \"d55d4558-c971-4a40-b1cb-41389cbf11c3\") " pod="service-telemetry/sg-bridge-1-build" Jan 26 17:21:26 crc kubenswrapper[4856]: I0126 17:21:26.273298 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d55d4558-c971-4a40-b1cb-41389cbf11c3-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"d55d4558-c971-4a40-b1cb-41389cbf11c3\") " pod="service-telemetry/sg-bridge-1-build" Jan 26 17:21:26 crc kubenswrapper[4856]: I0126 17:21:26.273467 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d55d4558-c971-4a40-b1cb-41389cbf11c3-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"d55d4558-c971-4a40-b1cb-41389cbf11c3\") " pod="service-telemetry/sg-bridge-1-build" Jan 26 17:21:26 crc kubenswrapper[4856]: I0126 17:21:26.273479 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d55d4558-c971-4a40-b1cb-41389cbf11c3-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"d55d4558-c971-4a40-b1cb-41389cbf11c3\") " pod="service-telemetry/sg-bridge-1-build" Jan 26 17:21:26 crc kubenswrapper[4856]: I0126 17:21:26.273714 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d55d4558-c971-4a40-b1cb-41389cbf11c3-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"d55d4558-c971-4a40-b1cb-41389cbf11c3\") " pod="service-telemetry/sg-bridge-1-build" Jan 26 17:21:26 crc kubenswrapper[4856]: I0126 17:21:26.274244 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d55d4558-c971-4a40-b1cb-41389cbf11c3-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"d55d4558-c971-4a40-b1cb-41389cbf11c3\") " pod="service-telemetry/sg-bridge-1-build" Jan 26 
17:21:26 crc kubenswrapper[4856]: I0126 17:21:26.283299 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/d55d4558-c971-4a40-b1cb-41389cbf11c3-builder-dockercfg-8h4xs-push\") pod \"sg-bridge-1-build\" (UID: \"d55d4558-c971-4a40-b1cb-41389cbf11c3\") " pod="service-telemetry/sg-bridge-1-build" Jan 26 17:21:26 crc kubenswrapper[4856]: I0126 17:21:26.283297 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/d55d4558-c971-4a40-b1cb-41389cbf11c3-builder-dockercfg-8h4xs-pull\") pod \"sg-bridge-1-build\" (UID: \"d55d4558-c971-4a40-b1cb-41389cbf11c3\") " pod="service-telemetry/sg-bridge-1-build" Jan 26 17:21:26 crc kubenswrapper[4856]: I0126 17:21:26.290935 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpgwg\" (UniqueName: \"kubernetes.io/projected/d55d4558-c971-4a40-b1cb-41389cbf11c3-kube-api-access-jpgwg\") pod \"sg-bridge-1-build\" (UID: \"d55d4558-c971-4a40-b1cb-41389cbf11c3\") " pod="service-telemetry/sg-bridge-1-build" Jan 26 17:21:26 crc kubenswrapper[4856]: I0126 17:21:26.370262 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 26 17:21:26 crc kubenswrapper[4856]: I0126 17:21:26.612326 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 26 17:21:27 crc kubenswrapper[4856]: I0126 17:21:27.307345 4856 generic.go:334] "Generic (PLEG): container finished" podID="d55d4558-c971-4a40-b1cb-41389cbf11c3" containerID="37c0850dfb46215185250da7eccd6ad1561ff7e374d7cfc24a1386d6bf8bcf2f" exitCode=0 Jan 26 17:21:27 crc kubenswrapper[4856]: I0126 17:21:27.307410 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"d55d4558-c971-4a40-b1cb-41389cbf11c3","Type":"ContainerDied","Data":"37c0850dfb46215185250da7eccd6ad1561ff7e374d7cfc24a1386d6bf8bcf2f"} Jan 26 17:21:27 crc kubenswrapper[4856]: I0126 17:21:27.307712 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"d55d4558-c971-4a40-b1cb-41389cbf11c3","Type":"ContainerStarted","Data":"50a0301e7718c33315fdaefdd9a7a9347ba078d7d7dbb1b07dc7689e421c1452"} Jan 26 17:21:28 crc kubenswrapper[4856]: I0126 17:21:28.317613 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"d55d4558-c971-4a40-b1cb-41389cbf11c3","Type":"ContainerStarted","Data":"1f422f325f8fa610d27451f9734bb53bd3fdded1c3e711a971293ebcfe442247"} Jan 26 17:21:28 crc kubenswrapper[4856]: I0126 17:21:28.354178 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-bridge-1-build" podStartSLOduration=2.354127105 podStartE2EDuration="2.354127105s" podCreationTimestamp="2026-01-26 17:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:21:28.349561454 +0000 UTC m=+1384.302815455" watchObservedRunningTime="2026-01-26 17:21:28.354127105 +0000 UTC m=+1384.307381086" Jan 26 17:21:35 
crc kubenswrapper[4856]: I0126 17:21:35.398384 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_d55d4558-c971-4a40-b1cb-41389cbf11c3/docker-build/0.log" Jan 26 17:21:35 crc kubenswrapper[4856]: I0126 17:21:35.399233 4856 generic.go:334] "Generic (PLEG): container finished" podID="d55d4558-c971-4a40-b1cb-41389cbf11c3" containerID="1f422f325f8fa610d27451f9734bb53bd3fdded1c3e711a971293ebcfe442247" exitCode=1 Jan 26 17:21:35 crc kubenswrapper[4856]: I0126 17:21:35.402995 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"d55d4558-c971-4a40-b1cb-41389cbf11c3","Type":"ContainerDied","Data":"1f422f325f8fa610d27451f9734bb53bd3fdded1c3e711a971293ebcfe442247"} Jan 26 17:21:36 crc kubenswrapper[4856]: I0126 17:21:36.392911 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 26 17:21:36 crc kubenswrapper[4856]: I0126 17:21:36.608412 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_d55d4558-c971-4a40-b1cb-41389cbf11c3/docker-build/0.log" Jan 26 17:21:36 crc kubenswrapper[4856]: I0126 17:21:36.609067 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 26 17:21:36 crc kubenswrapper[4856]: I0126 17:21:36.715406 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d55d4558-c971-4a40-b1cb-41389cbf11c3-build-system-configs\") pod \"d55d4558-c971-4a40-b1cb-41389cbf11c3\" (UID: \"d55d4558-c971-4a40-b1cb-41389cbf11c3\") " Jan 26 17:21:36 crc kubenswrapper[4856]: I0126 17:21:36.715774 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d55d4558-c971-4a40-b1cb-41389cbf11c3-container-storage-run\") pod \"d55d4558-c971-4a40-b1cb-41389cbf11c3\" (UID: \"d55d4558-c971-4a40-b1cb-41389cbf11c3\") " Jan 26 17:21:36 crc kubenswrapper[4856]: I0126 17:21:36.715871 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/d55d4558-c971-4a40-b1cb-41389cbf11c3-builder-dockercfg-8h4xs-push\") pod \"d55d4558-c971-4a40-b1cb-41389cbf11c3\" (UID: \"d55d4558-c971-4a40-b1cb-41389cbf11c3\") " Jan 26 17:21:36 crc kubenswrapper[4856]: I0126 17:21:36.715976 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/d55d4558-c971-4a40-b1cb-41389cbf11c3-builder-dockercfg-8h4xs-pull\") pod \"d55d4558-c971-4a40-b1cb-41389cbf11c3\" (UID: \"d55d4558-c971-4a40-b1cb-41389cbf11c3\") " Jan 26 17:21:36 crc kubenswrapper[4856]: I0126 17:21:36.716053 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d55d4558-c971-4a40-b1cb-41389cbf11c3-node-pullsecrets\") pod \"d55d4558-c971-4a40-b1cb-41389cbf11c3\" (UID: \"d55d4558-c971-4a40-b1cb-41389cbf11c3\") " Jan 26 17:21:36 crc kubenswrapper[4856]: I0126 17:21:36.716154 4856 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d55d4558-c971-4a40-b1cb-41389cbf11c3-buildcachedir\") pod \"d55d4558-c971-4a40-b1cb-41389cbf11c3\" (UID: \"d55d4558-c971-4a40-b1cb-41389cbf11c3\") " Jan 26 17:21:36 crc kubenswrapper[4856]: I0126 17:21:36.716297 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d55d4558-c971-4a40-b1cb-41389cbf11c3-build-proxy-ca-bundles\") pod \"d55d4558-c971-4a40-b1cb-41389cbf11c3\" (UID: \"d55d4558-c971-4a40-b1cb-41389cbf11c3\") " Jan 26 17:21:36 crc kubenswrapper[4856]: I0126 17:21:36.716754 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpgwg\" (UniqueName: \"kubernetes.io/projected/d55d4558-c971-4a40-b1cb-41389cbf11c3-kube-api-access-jpgwg\") pod \"d55d4558-c971-4a40-b1cb-41389cbf11c3\" (UID: \"d55d4558-c971-4a40-b1cb-41389cbf11c3\") " Jan 26 17:21:36 crc kubenswrapper[4856]: I0126 17:21:36.716893 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d55d4558-c971-4a40-b1cb-41389cbf11c3-container-storage-root\") pod \"d55d4558-c971-4a40-b1cb-41389cbf11c3\" (UID: \"d55d4558-c971-4a40-b1cb-41389cbf11c3\") " Jan 26 17:21:36 crc kubenswrapper[4856]: I0126 17:21:36.719741 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d55d4558-c971-4a40-b1cb-41389cbf11c3-build-ca-bundles\") pod \"d55d4558-c971-4a40-b1cb-41389cbf11c3\" (UID: \"d55d4558-c971-4a40-b1cb-41389cbf11c3\") " Jan 26 17:21:36 crc kubenswrapper[4856]: I0126 17:21:36.719894 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/d55d4558-c971-4a40-b1cb-41389cbf11c3-build-blob-cache\") pod \"d55d4558-c971-4a40-b1cb-41389cbf11c3\" (UID: \"d55d4558-c971-4a40-b1cb-41389cbf11c3\") " Jan 26 17:21:36 crc kubenswrapper[4856]: I0126 17:21:36.719981 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d55d4558-c971-4a40-b1cb-41389cbf11c3-buildworkdir\") pod \"d55d4558-c971-4a40-b1cb-41389cbf11c3\" (UID: \"d55d4558-c971-4a40-b1cb-41389cbf11c3\") " Jan 26 17:21:36 crc kubenswrapper[4856]: I0126 17:21:36.716169 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d55d4558-c971-4a40-b1cb-41389cbf11c3-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "d55d4558-c971-4a40-b1cb-41389cbf11c3" (UID: "d55d4558-c971-4a40-b1cb-41389cbf11c3"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:21:36 crc kubenswrapper[4856]: I0126 17:21:36.716241 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d55d4558-c971-4a40-b1cb-41389cbf11c3-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "d55d4558-c971-4a40-b1cb-41389cbf11c3" (UID: "d55d4558-c971-4a40-b1cb-41389cbf11c3"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:21:36 crc kubenswrapper[4856]: I0126 17:21:36.716371 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d55d4558-c971-4a40-b1cb-41389cbf11c3-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "d55d4558-c971-4a40-b1cb-41389cbf11c3" (UID: "d55d4558-c971-4a40-b1cb-41389cbf11c3"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:21:36 crc kubenswrapper[4856]: I0126 17:21:36.716694 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d55d4558-c971-4a40-b1cb-41389cbf11c3-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "d55d4558-c971-4a40-b1cb-41389cbf11c3" (UID: "d55d4558-c971-4a40-b1cb-41389cbf11c3"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:21:36 crc kubenswrapper[4856]: I0126 17:21:36.716701 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d55d4558-c971-4a40-b1cb-41389cbf11c3-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "d55d4558-c971-4a40-b1cb-41389cbf11c3" (UID: "d55d4558-c971-4a40-b1cb-41389cbf11c3"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:21:36 crc kubenswrapper[4856]: I0126 17:21:36.720411 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d55d4558-c971-4a40-b1cb-41389cbf11c3-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "d55d4558-c971-4a40-b1cb-41389cbf11c3" (UID: "d55d4558-c971-4a40-b1cb-41389cbf11c3"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:21:36 crc kubenswrapper[4856]: I0126 17:21:36.720435 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d55d4558-c971-4a40-b1cb-41389cbf11c3-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "d55d4558-c971-4a40-b1cb-41389cbf11c3" (UID: "d55d4558-c971-4a40-b1cb-41389cbf11c3"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:21:36 crc kubenswrapper[4856]: I0126 17:21:36.721153 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d55d4558-c971-4a40-b1cb-41389cbf11c3-kube-api-access-jpgwg" (OuterVolumeSpecName: "kube-api-access-jpgwg") pod "d55d4558-c971-4a40-b1cb-41389cbf11c3" (UID: "d55d4558-c971-4a40-b1cb-41389cbf11c3"). InnerVolumeSpecName "kube-api-access-jpgwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:21:36 crc kubenswrapper[4856]: I0126 17:21:36.721231 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d55d4558-c971-4a40-b1cb-41389cbf11c3-builder-dockercfg-8h4xs-pull" (OuterVolumeSpecName: "builder-dockercfg-8h4xs-pull") pod "d55d4558-c971-4a40-b1cb-41389cbf11c3" (UID: "d55d4558-c971-4a40-b1cb-41389cbf11c3"). InnerVolumeSpecName "builder-dockercfg-8h4xs-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:21:36 crc kubenswrapper[4856]: I0126 17:21:36.721486 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d55d4558-c971-4a40-b1cb-41389cbf11c3-builder-dockercfg-8h4xs-push" (OuterVolumeSpecName: "builder-dockercfg-8h4xs-push") pod "d55d4558-c971-4a40-b1cb-41389cbf11c3" (UID: "d55d4558-c971-4a40-b1cb-41389cbf11c3"). InnerVolumeSpecName "builder-dockercfg-8h4xs-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:21:36 crc kubenswrapper[4856]: I0126 17:21:36.721691 4856 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d55d4558-c971-4a40-b1cb-41389cbf11c3-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 26 17:21:36 crc kubenswrapper[4856]: I0126 17:21:36.721714 4856 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d55d4558-c971-4a40-b1cb-41389cbf11c3-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 17:21:36 crc kubenswrapper[4856]: I0126 17:21:36.721725 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpgwg\" (UniqueName: \"kubernetes.io/projected/d55d4558-c971-4a40-b1cb-41389cbf11c3-kube-api-access-jpgwg\") on node \"crc\" DevicePath \"\"" Jan 26 17:21:36 crc kubenswrapper[4856]: I0126 17:21:36.721732 4856 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d55d4558-c971-4a40-b1cb-41389cbf11c3-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 17:21:36 crc kubenswrapper[4856]: I0126 17:21:36.721740 4856 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d55d4558-c971-4a40-b1cb-41389cbf11c3-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 26 17:21:36 crc kubenswrapper[4856]: I0126 17:21:36.721748 4856 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d55d4558-c971-4a40-b1cb-41389cbf11c3-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 26 17:21:36 crc kubenswrapper[4856]: I0126 17:21:36.721756 4856 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d55d4558-c971-4a40-b1cb-41389cbf11c3-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 26 17:21:36 
crc kubenswrapper[4856]: I0126 17:21:36.721774 4856 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/d55d4558-c971-4a40-b1cb-41389cbf11c3-builder-dockercfg-8h4xs-push\") on node \"crc\" DevicePath \"\"" Jan 26 17:21:36 crc kubenswrapper[4856]: I0126 17:21:36.721787 4856 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/d55d4558-c971-4a40-b1cb-41389cbf11c3-builder-dockercfg-8h4xs-pull\") on node \"crc\" DevicePath \"\"" Jan 26 17:21:36 crc kubenswrapper[4856]: I0126 17:21:36.721797 4856 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d55d4558-c971-4a40-b1cb-41389cbf11c3-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 26 17:21:36 crc kubenswrapper[4856]: I0126 17:21:36.793540 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d55d4558-c971-4a40-b1cb-41389cbf11c3-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "d55d4558-c971-4a40-b1cb-41389cbf11c3" (UID: "d55d4558-c971-4a40-b1cb-41389cbf11c3"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:21:36 crc kubenswrapper[4856]: I0126 17:21:36.823306 4856 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d55d4558-c971-4a40-b1cb-41389cbf11c3-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 26 17:21:37 crc kubenswrapper[4856]: I0126 17:21:37.081066 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d55d4558-c971-4a40-b1cb-41389cbf11c3-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "d55d4558-c971-4a40-b1cb-41389cbf11c3" (UID: "d55d4558-c971-4a40-b1cb-41389cbf11c3"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:21:37 crc kubenswrapper[4856]: I0126 17:21:37.141794 4856 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d55d4558-c971-4a40-b1cb-41389cbf11c3-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 26 17:21:37 crc kubenswrapper[4856]: I0126 17:21:37.411206 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_d55d4558-c971-4a40-b1cb-41389cbf11c3/docker-build/0.log" Jan 26 17:21:37 crc kubenswrapper[4856]: I0126 17:21:37.411794 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"d55d4558-c971-4a40-b1cb-41389cbf11c3","Type":"ContainerDied","Data":"50a0301e7718c33315fdaefdd9a7a9347ba078d7d7dbb1b07dc7689e421c1452"} Jan 26 17:21:37 crc kubenswrapper[4856]: I0126 17:21:37.411843 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50a0301e7718c33315fdaefdd9a7a9347ba078d7d7dbb1b07dc7689e421c1452" Jan 26 17:21:37 crc kubenswrapper[4856]: I0126 17:21:37.411851 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 26 17:21:37 crc kubenswrapper[4856]: I0126 17:21:37.431413 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 26 17:21:37 crc kubenswrapper[4856]: I0126 17:21:37.437213 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 26 17:21:38 crc kubenswrapper[4856]: I0126 17:21:38.011031 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-2-build"] Jan 26 17:21:38 crc kubenswrapper[4856]: E0126 17:21:38.011402 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d55d4558-c971-4a40-b1cb-41389cbf11c3" containerName="manage-dockerfile" Jan 26 17:21:38 crc kubenswrapper[4856]: I0126 17:21:38.011427 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="d55d4558-c971-4a40-b1cb-41389cbf11c3" containerName="manage-dockerfile" Jan 26 17:21:38 crc kubenswrapper[4856]: E0126 17:21:38.011458 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d55d4558-c971-4a40-b1cb-41389cbf11c3" containerName="docker-build" Jan 26 17:21:38 crc kubenswrapper[4856]: I0126 17:21:38.011467 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="d55d4558-c971-4a40-b1cb-41389cbf11c3" containerName="docker-build" Jan 26 17:21:38 crc kubenswrapper[4856]: I0126 17:21:38.011630 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="d55d4558-c971-4a40-b1cb-41389cbf11c3" containerName="docker-build" Jan 26 17:21:38 crc kubenswrapper[4856]: I0126 17:21:38.022702 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 26 17:21:38 crc kubenswrapper[4856]: I0126 17:21:38.027672 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-ca" Jan 26 17:21:38 crc kubenswrapper[4856]: I0126 17:21:38.027985 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-global-ca" Jan 26 17:21:38 crc kubenswrapper[4856]: I0126 17:21:38.028139 4856 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-8h4xs" Jan 26 17:21:38 crc kubenswrapper[4856]: I0126 17:21:38.028541 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-sys-config" Jan 26 17:21:38 crc kubenswrapper[4856]: I0126 17:21:38.039100 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"] Jan 26 17:21:38 crc kubenswrapper[4856]: I0126 17:21:38.156291 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d6179047-d35a-4cad-93c5-2ac126d36b6c-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"d6179047-d35a-4cad-93c5-2ac126d36b6c\") " pod="service-telemetry/sg-bridge-2-build" Jan 26 17:21:38 crc kubenswrapper[4856]: I0126 17:21:38.156349 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d6179047-d35a-4cad-93c5-2ac126d36b6c-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"d6179047-d35a-4cad-93c5-2ac126d36b6c\") " pod="service-telemetry/sg-bridge-2-build" Jan 26 17:21:38 crc kubenswrapper[4856]: I0126 17:21:38.156480 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/d6179047-d35a-4cad-93c5-2ac126d36b6c-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"d6179047-d35a-4cad-93c5-2ac126d36b6c\") " pod="service-telemetry/sg-bridge-2-build" Jan 26 17:21:38 crc kubenswrapper[4856]: I0126 17:21:38.156579 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d6179047-d35a-4cad-93c5-2ac126d36b6c-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"d6179047-d35a-4cad-93c5-2ac126d36b6c\") " pod="service-telemetry/sg-bridge-2-build" Jan 26 17:21:38 crc kubenswrapper[4856]: I0126 17:21:38.156620 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/d6179047-d35a-4cad-93c5-2ac126d36b6c-builder-dockercfg-8h4xs-pull\") pod \"sg-bridge-2-build\" (UID: \"d6179047-d35a-4cad-93c5-2ac126d36b6c\") " pod="service-telemetry/sg-bridge-2-build" Jan 26 17:21:38 crc kubenswrapper[4856]: I0126 17:21:38.156679 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d6179047-d35a-4cad-93c5-2ac126d36b6c-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"d6179047-d35a-4cad-93c5-2ac126d36b6c\") " pod="service-telemetry/sg-bridge-2-build" Jan 26 17:21:38 crc kubenswrapper[4856]: I0126 17:21:38.156704 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjmqz\" (UniqueName: \"kubernetes.io/projected/d6179047-d35a-4cad-93c5-2ac126d36b6c-kube-api-access-sjmqz\") pod \"sg-bridge-2-build\" (UID: \"d6179047-d35a-4cad-93c5-2ac126d36b6c\") " pod="service-telemetry/sg-bridge-2-build" Jan 26 17:21:38 crc kubenswrapper[4856]: I0126 17:21:38.156721 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d6179047-d35a-4cad-93c5-2ac126d36b6c-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"d6179047-d35a-4cad-93c5-2ac126d36b6c\") " pod="service-telemetry/sg-bridge-2-build" Jan 26 17:21:38 crc kubenswrapper[4856]: I0126 17:21:38.156742 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/d6179047-d35a-4cad-93c5-2ac126d36b6c-builder-dockercfg-8h4xs-push\") pod \"sg-bridge-2-build\" (UID: \"d6179047-d35a-4cad-93c5-2ac126d36b6c\") " pod="service-telemetry/sg-bridge-2-build" Jan 26 17:21:38 crc kubenswrapper[4856]: I0126 17:21:38.156768 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d6179047-d35a-4cad-93c5-2ac126d36b6c-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"d6179047-d35a-4cad-93c5-2ac126d36b6c\") " pod="service-telemetry/sg-bridge-2-build" Jan 26 17:21:38 crc kubenswrapper[4856]: I0126 17:21:38.156834 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d6179047-d35a-4cad-93c5-2ac126d36b6c-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"d6179047-d35a-4cad-93c5-2ac126d36b6c\") " pod="service-telemetry/sg-bridge-2-build" Jan 26 17:21:38 crc kubenswrapper[4856]: I0126 17:21:38.156878 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d6179047-d35a-4cad-93c5-2ac126d36b6c-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"d6179047-d35a-4cad-93c5-2ac126d36b6c\") " pod="service-telemetry/sg-bridge-2-build" Jan 26 17:21:38 crc kubenswrapper[4856]: I0126 17:21:38.258093 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d6179047-d35a-4cad-93c5-2ac126d36b6c-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"d6179047-d35a-4cad-93c5-2ac126d36b6c\") " pod="service-telemetry/sg-bridge-2-build" Jan 26 17:21:38 crc kubenswrapper[4856]: I0126 17:21:38.258160 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjmqz\" (UniqueName: \"kubernetes.io/projected/d6179047-d35a-4cad-93c5-2ac126d36b6c-kube-api-access-sjmqz\") pod \"sg-bridge-2-build\" (UID: \"d6179047-d35a-4cad-93c5-2ac126d36b6c\") " pod="service-telemetry/sg-bridge-2-build" Jan 26 17:21:38 crc kubenswrapper[4856]: I0126 17:21:38.258186 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d6179047-d35a-4cad-93c5-2ac126d36b6c-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"d6179047-d35a-4cad-93c5-2ac126d36b6c\") " pod="service-telemetry/sg-bridge-2-build" Jan 26 17:21:38 crc kubenswrapper[4856]: I0126 17:21:38.258219 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/d6179047-d35a-4cad-93c5-2ac126d36b6c-builder-dockercfg-8h4xs-push\") pod \"sg-bridge-2-build\" (UID: \"d6179047-d35a-4cad-93c5-2ac126d36b6c\") " pod="service-telemetry/sg-bridge-2-build" Jan 26 17:21:38 crc kubenswrapper[4856]: I0126 17:21:38.258246 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d6179047-d35a-4cad-93c5-2ac126d36b6c-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"d6179047-d35a-4cad-93c5-2ac126d36b6c\") " pod="service-telemetry/sg-bridge-2-build" Jan 26 17:21:38 crc kubenswrapper[4856]: I0126 17:21:38.258291 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/d6179047-d35a-4cad-93c5-2ac126d36b6c-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"d6179047-d35a-4cad-93c5-2ac126d36b6c\") " pod="service-telemetry/sg-bridge-2-build" Jan 26 17:21:38 crc kubenswrapper[4856]: I0126 17:21:38.258315 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d6179047-d35a-4cad-93c5-2ac126d36b6c-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"d6179047-d35a-4cad-93c5-2ac126d36b6c\") " pod="service-telemetry/sg-bridge-2-build" Jan 26 17:21:38 crc kubenswrapper[4856]: I0126 17:21:38.258351 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d6179047-d35a-4cad-93c5-2ac126d36b6c-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"d6179047-d35a-4cad-93c5-2ac126d36b6c\") " pod="service-telemetry/sg-bridge-2-build" Jan 26 17:21:38 crc kubenswrapper[4856]: I0126 17:21:38.258379 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d6179047-d35a-4cad-93c5-2ac126d36b6c-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"d6179047-d35a-4cad-93c5-2ac126d36b6c\") " pod="service-telemetry/sg-bridge-2-build" Jan 26 17:21:38 crc kubenswrapper[4856]: I0126 17:21:38.258402 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d6179047-d35a-4cad-93c5-2ac126d36b6c-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"d6179047-d35a-4cad-93c5-2ac126d36b6c\") " pod="service-telemetry/sg-bridge-2-build" Jan 26 17:21:38 crc kubenswrapper[4856]: I0126 17:21:38.258440 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d6179047-d35a-4cad-93c5-2ac126d36b6c-container-storage-root\") pod 
\"sg-bridge-2-build\" (UID: \"d6179047-d35a-4cad-93c5-2ac126d36b6c\") " pod="service-telemetry/sg-bridge-2-build" Jan 26 17:21:38 crc kubenswrapper[4856]: I0126 17:21:38.258487 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/d6179047-d35a-4cad-93c5-2ac126d36b6c-builder-dockercfg-8h4xs-pull\") pod \"sg-bridge-2-build\" (UID: \"d6179047-d35a-4cad-93c5-2ac126d36b6c\") " pod="service-telemetry/sg-bridge-2-build" Jan 26 17:21:38 crc kubenswrapper[4856]: I0126 17:21:38.258850 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d6179047-d35a-4cad-93c5-2ac126d36b6c-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"d6179047-d35a-4cad-93c5-2ac126d36b6c\") " pod="service-telemetry/sg-bridge-2-build" Jan 26 17:21:38 crc kubenswrapper[4856]: I0126 17:21:38.258910 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d6179047-d35a-4cad-93c5-2ac126d36b6c-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"d6179047-d35a-4cad-93c5-2ac126d36b6c\") " pod="service-telemetry/sg-bridge-2-build" Jan 26 17:21:38 crc kubenswrapper[4856]: I0126 17:21:38.259116 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d6179047-d35a-4cad-93c5-2ac126d36b6c-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"d6179047-d35a-4cad-93c5-2ac126d36b6c\") " pod="service-telemetry/sg-bridge-2-build" Jan 26 17:21:38 crc kubenswrapper[4856]: I0126 17:21:38.259228 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d6179047-d35a-4cad-93c5-2ac126d36b6c-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"d6179047-d35a-4cad-93c5-2ac126d36b6c\") " pod="service-telemetry/sg-bridge-2-build" Jan 
26 17:21:38 crc kubenswrapper[4856]: I0126 17:21:38.259246 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d6179047-d35a-4cad-93c5-2ac126d36b6c-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"d6179047-d35a-4cad-93c5-2ac126d36b6c\") " pod="service-telemetry/sg-bridge-2-build" Jan 26 17:21:38 crc kubenswrapper[4856]: I0126 17:21:38.259276 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d6179047-d35a-4cad-93c5-2ac126d36b6c-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"d6179047-d35a-4cad-93c5-2ac126d36b6c\") " pod="service-telemetry/sg-bridge-2-build" Jan 26 17:21:38 crc kubenswrapper[4856]: I0126 17:21:38.259281 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d6179047-d35a-4cad-93c5-2ac126d36b6c-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"d6179047-d35a-4cad-93c5-2ac126d36b6c\") " pod="service-telemetry/sg-bridge-2-build" Jan 26 17:21:38 crc kubenswrapper[4856]: I0126 17:21:38.259793 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d6179047-d35a-4cad-93c5-2ac126d36b6c-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"d6179047-d35a-4cad-93c5-2ac126d36b6c\") " pod="service-telemetry/sg-bridge-2-build" Jan 26 17:21:38 crc kubenswrapper[4856]: I0126 17:21:38.259818 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d6179047-d35a-4cad-93c5-2ac126d36b6c-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"d6179047-d35a-4cad-93c5-2ac126d36b6c\") " pod="service-telemetry/sg-bridge-2-build" Jan 26 17:21:38 crc kubenswrapper[4856]: I0126 17:21:38.264623 4856 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/d6179047-d35a-4cad-93c5-2ac126d36b6c-builder-dockercfg-8h4xs-push\") pod \"sg-bridge-2-build\" (UID: \"d6179047-d35a-4cad-93c5-2ac126d36b6c\") " pod="service-telemetry/sg-bridge-2-build" Jan 26 17:21:38 crc kubenswrapper[4856]: I0126 17:21:38.266741 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/d6179047-d35a-4cad-93c5-2ac126d36b6c-builder-dockercfg-8h4xs-pull\") pod \"sg-bridge-2-build\" (UID: \"d6179047-d35a-4cad-93c5-2ac126d36b6c\") " pod="service-telemetry/sg-bridge-2-build" Jan 26 17:21:38 crc kubenswrapper[4856]: I0126 17:21:38.275157 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjmqz\" (UniqueName: \"kubernetes.io/projected/d6179047-d35a-4cad-93c5-2ac126d36b6c-kube-api-access-sjmqz\") pod \"sg-bridge-2-build\" (UID: \"d6179047-d35a-4cad-93c5-2ac126d36b6c\") " pod="service-telemetry/sg-bridge-2-build" Jan 26 17:21:38 crc kubenswrapper[4856]: I0126 17:21:38.391952 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 26 17:21:38 crc kubenswrapper[4856]: I0126 17:21:38.587113 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"] Jan 26 17:21:39 crc kubenswrapper[4856]: I0126 17:21:39.403288 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d55d4558-c971-4a40-b1cb-41389cbf11c3" path="/var/lib/kubelet/pods/d55d4558-c971-4a40-b1cb-41389cbf11c3/volumes" Jan 26 17:21:39 crc kubenswrapper[4856]: I0126 17:21:39.424300 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"d6179047-d35a-4cad-93c5-2ac126d36b6c","Type":"ContainerStarted","Data":"45628591f87569fa9e26c4179bfa65038a601dfa6e79bf1c2dc16826001a2368"} Jan 26 17:21:39 crc kubenswrapper[4856]: I0126 17:21:39.424357 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"d6179047-d35a-4cad-93c5-2ac126d36b6c","Type":"ContainerStarted","Data":"6a8b6988fa5a1ca6aa116299ba279d842c443263b949816e3eeba14922cdf1f8"} Jan 26 17:21:40 crc kubenswrapper[4856]: I0126 17:21:40.515123 4856 generic.go:334] "Generic (PLEG): container finished" podID="d6179047-d35a-4cad-93c5-2ac126d36b6c" containerID="45628591f87569fa9e26c4179bfa65038a601dfa6e79bf1c2dc16826001a2368" exitCode=0 Jan 26 17:21:40 crc kubenswrapper[4856]: I0126 17:21:40.515218 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"d6179047-d35a-4cad-93c5-2ac126d36b6c","Type":"ContainerDied","Data":"45628591f87569fa9e26c4179bfa65038a601dfa6e79bf1c2dc16826001a2368"} Jan 26 17:21:41 crc kubenswrapper[4856]: I0126 17:21:41.538310 4856 generic.go:334] "Generic (PLEG): container finished" podID="d6179047-d35a-4cad-93c5-2ac126d36b6c" containerID="40e8aa2782ef0e0d6ad31061c662a5d62042428d1e0ef84a274003729ce0e44b" exitCode=0 Jan 26 17:21:41 crc kubenswrapper[4856]: I0126 17:21:41.538359 4856 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"d6179047-d35a-4cad-93c5-2ac126d36b6c","Type":"ContainerDied","Data":"40e8aa2782ef0e0d6ad31061c662a5d62042428d1e0ef84a274003729ce0e44b"} Jan 26 17:21:41 crc kubenswrapper[4856]: I0126 17:21:41.584866 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_d6179047-d35a-4cad-93c5-2ac126d36b6c/manage-dockerfile/0.log" Jan 26 17:21:42 crc kubenswrapper[4856]: I0126 17:21:42.547586 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"d6179047-d35a-4cad-93c5-2ac126d36b6c","Type":"ContainerStarted","Data":"0708c3ea94fa93ca300b24c4b8401e5affed40b2e66cd659a6c94ff571bc0799"} Jan 26 17:21:42 crc kubenswrapper[4856]: I0126 17:21:42.577021 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-bridge-2-build" podStartSLOduration=5.576984808 podStartE2EDuration="5.576984808s" podCreationTimestamp="2026-01-26 17:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:21:42.574421494 +0000 UTC m=+1398.527675475" watchObservedRunningTime="2026-01-26 17:21:42.576984808 +0000 UTC m=+1398.530238809" Jan 26 17:22:25 crc kubenswrapper[4856]: E0126 17:22:25.375388 4856 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6179047_d35a_4cad_93c5_2ac126d36b6c.slice/buildah-buildah1089185293\": RecentStats: unable to find data in memory cache]" Jan 26 17:22:26 crc kubenswrapper[4856]: I0126 17:22:26.939656 4856 patch_prober.go:28] interesting pod/machine-config-daemon-xm9cq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:22:26 crc kubenswrapper[4856]: I0126 17:22:26.940310 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:22:31 crc kubenswrapper[4856]: I0126 17:22:31.940825 4856 generic.go:334] "Generic (PLEG): container finished" podID="d6179047-d35a-4cad-93c5-2ac126d36b6c" containerID="0708c3ea94fa93ca300b24c4b8401e5affed40b2e66cd659a6c94ff571bc0799" exitCode=0 Jan 26 17:22:31 crc kubenswrapper[4856]: I0126 17:22:31.940900 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"d6179047-d35a-4cad-93c5-2ac126d36b6c","Type":"ContainerDied","Data":"0708c3ea94fa93ca300b24c4b8401e5affed40b2e66cd659a6c94ff571bc0799"} Jan 26 17:22:33 crc kubenswrapper[4856]: I0126 17:22:33.194187 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 26 17:22:33 crc kubenswrapper[4856]: I0126 17:22:33.304774 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d6179047-d35a-4cad-93c5-2ac126d36b6c-build-blob-cache\") pod \"d6179047-d35a-4cad-93c5-2ac126d36b6c\" (UID: \"d6179047-d35a-4cad-93c5-2ac126d36b6c\") " Jan 26 17:22:33 crc kubenswrapper[4856]: I0126 17:22:33.304838 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/d6179047-d35a-4cad-93c5-2ac126d36b6c-builder-dockercfg-8h4xs-push\") pod \"d6179047-d35a-4cad-93c5-2ac126d36b6c\" (UID: \"d6179047-d35a-4cad-93c5-2ac126d36b6c\") " Jan 26 17:22:33 crc kubenswrapper[4856]: I0126 17:22:33.304884 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d6179047-d35a-4cad-93c5-2ac126d36b6c-build-ca-bundles\") pod \"d6179047-d35a-4cad-93c5-2ac126d36b6c\" (UID: \"d6179047-d35a-4cad-93c5-2ac126d36b6c\") " Jan 26 17:22:33 crc kubenswrapper[4856]: I0126 17:22:33.304938 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d6179047-d35a-4cad-93c5-2ac126d36b6c-container-storage-run\") pod \"d6179047-d35a-4cad-93c5-2ac126d36b6c\" (UID: \"d6179047-d35a-4cad-93c5-2ac126d36b6c\") " Jan 26 17:22:33 crc kubenswrapper[4856]: I0126 17:22:33.305837 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d6179047-d35a-4cad-93c5-2ac126d36b6c-build-system-configs\") pod \"d6179047-d35a-4cad-93c5-2ac126d36b6c\" (UID: \"d6179047-d35a-4cad-93c5-2ac126d36b6c\") " Jan 26 17:22:33 crc kubenswrapper[4856]: I0126 17:22:33.305876 4856 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d6179047-d35a-4cad-93c5-2ac126d36b6c-buildworkdir\") pod \"d6179047-d35a-4cad-93c5-2ac126d36b6c\" (UID: \"d6179047-d35a-4cad-93c5-2ac126d36b6c\") " Jan 26 17:22:33 crc kubenswrapper[4856]: I0126 17:22:33.305899 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d6179047-d35a-4cad-93c5-2ac126d36b6c-build-proxy-ca-bundles\") pod \"d6179047-d35a-4cad-93c5-2ac126d36b6c\" (UID: \"d6179047-d35a-4cad-93c5-2ac126d36b6c\") " Jan 26 17:22:33 crc kubenswrapper[4856]: I0126 17:22:33.305923 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d6179047-d35a-4cad-93c5-2ac126d36b6c-buildcachedir\") pod \"d6179047-d35a-4cad-93c5-2ac126d36b6c\" (UID: \"d6179047-d35a-4cad-93c5-2ac126d36b6c\") " Jan 26 17:22:33 crc kubenswrapper[4856]: I0126 17:22:33.305998 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjmqz\" (UniqueName: \"kubernetes.io/projected/d6179047-d35a-4cad-93c5-2ac126d36b6c-kube-api-access-sjmqz\") pod \"d6179047-d35a-4cad-93c5-2ac126d36b6c\" (UID: \"d6179047-d35a-4cad-93c5-2ac126d36b6c\") " Jan 26 17:22:33 crc kubenswrapper[4856]: I0126 17:22:33.306029 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d6179047-d35a-4cad-93c5-2ac126d36b6c-container-storage-root\") pod \"d6179047-d35a-4cad-93c5-2ac126d36b6c\" (UID: \"d6179047-d35a-4cad-93c5-2ac126d36b6c\") " Jan 26 17:22:33 crc kubenswrapper[4856]: I0126 17:22:33.306052 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/d6179047-d35a-4cad-93c5-2ac126d36b6c-builder-dockercfg-8h4xs-pull\") 
pod \"d6179047-d35a-4cad-93c5-2ac126d36b6c\" (UID: \"d6179047-d35a-4cad-93c5-2ac126d36b6c\") " Jan 26 17:22:33 crc kubenswrapper[4856]: I0126 17:22:33.306078 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d6179047-d35a-4cad-93c5-2ac126d36b6c-node-pullsecrets\") pod \"d6179047-d35a-4cad-93c5-2ac126d36b6c\" (UID: \"d6179047-d35a-4cad-93c5-2ac126d36b6c\") " Jan 26 17:22:33 crc kubenswrapper[4856]: I0126 17:22:33.306227 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6179047-d35a-4cad-93c5-2ac126d36b6c-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "d6179047-d35a-4cad-93c5-2ac126d36b6c" (UID: "d6179047-d35a-4cad-93c5-2ac126d36b6c"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:22:33 crc kubenswrapper[4856]: I0126 17:22:33.306345 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6179047-d35a-4cad-93c5-2ac126d36b6c-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "d6179047-d35a-4cad-93c5-2ac126d36b6c" (UID: "d6179047-d35a-4cad-93c5-2ac126d36b6c"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:22:33 crc kubenswrapper[4856]: I0126 17:22:33.306481 4856 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d6179047-d35a-4cad-93c5-2ac126d36b6c-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 26 17:22:33 crc kubenswrapper[4856]: I0126 17:22:33.306586 4856 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d6179047-d35a-4cad-93c5-2ac126d36b6c-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 26 17:22:33 crc kubenswrapper[4856]: I0126 17:22:33.306866 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6179047-d35a-4cad-93c5-2ac126d36b6c-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "d6179047-d35a-4cad-93c5-2ac126d36b6c" (UID: "d6179047-d35a-4cad-93c5-2ac126d36b6c"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:22:33 crc kubenswrapper[4856]: I0126 17:22:33.307089 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6179047-d35a-4cad-93c5-2ac126d36b6c-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "d6179047-d35a-4cad-93c5-2ac126d36b6c" (UID: "d6179047-d35a-4cad-93c5-2ac126d36b6c"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:22:33 crc kubenswrapper[4856]: I0126 17:22:33.307265 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6179047-d35a-4cad-93c5-2ac126d36b6c-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "d6179047-d35a-4cad-93c5-2ac126d36b6c" (UID: "d6179047-d35a-4cad-93c5-2ac126d36b6c"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:22:33 crc kubenswrapper[4856]: I0126 17:22:33.307435 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6179047-d35a-4cad-93c5-2ac126d36b6c-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "d6179047-d35a-4cad-93c5-2ac126d36b6c" (UID: "d6179047-d35a-4cad-93c5-2ac126d36b6c"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:22:33 crc kubenswrapper[4856]: I0126 17:22:33.308508 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6179047-d35a-4cad-93c5-2ac126d36b6c-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "d6179047-d35a-4cad-93c5-2ac126d36b6c" (UID: "d6179047-d35a-4cad-93c5-2ac126d36b6c"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:22:33 crc kubenswrapper[4856]: I0126 17:22:33.316574 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6179047-d35a-4cad-93c5-2ac126d36b6c-builder-dockercfg-8h4xs-push" (OuterVolumeSpecName: "builder-dockercfg-8h4xs-push") pod "d6179047-d35a-4cad-93c5-2ac126d36b6c" (UID: "d6179047-d35a-4cad-93c5-2ac126d36b6c"). InnerVolumeSpecName "builder-dockercfg-8h4xs-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:22:33 crc kubenswrapper[4856]: I0126 17:22:33.333934 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6179047-d35a-4cad-93c5-2ac126d36b6c-kube-api-access-sjmqz" (OuterVolumeSpecName: "kube-api-access-sjmqz") pod "d6179047-d35a-4cad-93c5-2ac126d36b6c" (UID: "d6179047-d35a-4cad-93c5-2ac126d36b6c"). InnerVolumeSpecName "kube-api-access-sjmqz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:22:33 crc kubenswrapper[4856]: I0126 17:22:33.341731 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6179047-d35a-4cad-93c5-2ac126d36b6c-builder-dockercfg-8h4xs-pull" (OuterVolumeSpecName: "builder-dockercfg-8h4xs-pull") pod "d6179047-d35a-4cad-93c5-2ac126d36b6c" (UID: "d6179047-d35a-4cad-93c5-2ac126d36b6c"). InnerVolumeSpecName "builder-dockercfg-8h4xs-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:22:33 crc kubenswrapper[4856]: I0126 17:22:33.409391 4856 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/d6179047-d35a-4cad-93c5-2ac126d36b6c-builder-dockercfg-8h4xs-push\") on node \"crc\" DevicePath \"\"" Jan 26 17:22:33 crc kubenswrapper[4856]: I0126 17:22:33.409450 4856 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d6179047-d35a-4cad-93c5-2ac126d36b6c-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 17:22:33 crc kubenswrapper[4856]: I0126 17:22:33.409462 4856 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d6179047-d35a-4cad-93c5-2ac126d36b6c-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 26 17:22:33 crc kubenswrapper[4856]: I0126 17:22:33.409472 4856 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d6179047-d35a-4cad-93c5-2ac126d36b6c-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 26 17:22:33 crc kubenswrapper[4856]: I0126 17:22:33.409484 4856 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d6179047-d35a-4cad-93c5-2ac126d36b6c-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 26 17:22:33 crc kubenswrapper[4856]: I0126 17:22:33.409494 4856 
reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d6179047-d35a-4cad-93c5-2ac126d36b6c-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 17:22:33 crc kubenswrapper[4856]: I0126 17:22:33.409502 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjmqz\" (UniqueName: \"kubernetes.io/projected/d6179047-d35a-4cad-93c5-2ac126d36b6c-kube-api-access-sjmqz\") on node \"crc\" DevicePath \"\"" Jan 26 17:22:33 crc kubenswrapper[4856]: I0126 17:22:33.409510 4856 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/d6179047-d35a-4cad-93c5-2ac126d36b6c-builder-dockercfg-8h4xs-pull\") on node \"crc\" DevicePath \"\"" Jan 26 17:22:33 crc kubenswrapper[4856]: I0126 17:22:33.559631 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6179047-d35a-4cad-93c5-2ac126d36b6c-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "d6179047-d35a-4cad-93c5-2ac126d36b6c" (UID: "d6179047-d35a-4cad-93c5-2ac126d36b6c"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:22:33 crc kubenswrapper[4856]: I0126 17:22:33.612081 4856 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d6179047-d35a-4cad-93c5-2ac126d36b6c-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 26 17:22:33 crc kubenswrapper[4856]: I0126 17:22:33.958490 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"d6179047-d35a-4cad-93c5-2ac126d36b6c","Type":"ContainerDied","Data":"6a8b6988fa5a1ca6aa116299ba279d842c443263b949816e3eeba14922cdf1f8"} Jan 26 17:22:33 crc kubenswrapper[4856]: I0126 17:22:33.958616 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a8b6988fa5a1ca6aa116299ba279d842c443263b949816e3eeba14922cdf1f8" Jan 26 17:22:33 crc kubenswrapper[4856]: I0126 17:22:33.958628 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 26 17:22:34 crc kubenswrapper[4856]: I0126 17:22:34.044579 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6179047-d35a-4cad-93c5-2ac126d36b6c-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "d6179047-d35a-4cad-93c5-2ac126d36b6c" (UID: "d6179047-d35a-4cad-93c5-2ac126d36b6c"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:22:34 crc kubenswrapper[4856]: I0126 17:22:34.118083 4856 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d6179047-d35a-4cad-93c5-2ac126d36b6c-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 26 17:22:38 crc kubenswrapper[4856]: I0126 17:22:38.130033 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 26 17:22:38 crc kubenswrapper[4856]: E0126 17:22:38.130646 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6179047-d35a-4cad-93c5-2ac126d36b6c" containerName="git-clone" Jan 26 17:22:38 crc kubenswrapper[4856]: I0126 17:22:38.130663 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6179047-d35a-4cad-93c5-2ac126d36b6c" containerName="git-clone" Jan 26 17:22:38 crc kubenswrapper[4856]: E0126 17:22:38.130673 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6179047-d35a-4cad-93c5-2ac126d36b6c" containerName="manage-dockerfile" Jan 26 17:22:38 crc kubenswrapper[4856]: I0126 17:22:38.130680 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6179047-d35a-4cad-93c5-2ac126d36b6c" containerName="manage-dockerfile" Jan 26 17:22:38 crc kubenswrapper[4856]: E0126 17:22:38.130697 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6179047-d35a-4cad-93c5-2ac126d36b6c" containerName="docker-build" Jan 26 17:22:38 crc kubenswrapper[4856]: I0126 17:22:38.130709 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6179047-d35a-4cad-93c5-2ac126d36b6c" containerName="docker-build" Jan 26 17:22:38 crc kubenswrapper[4856]: I0126 17:22:38.130890 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6179047-d35a-4cad-93c5-2ac126d36b6c" containerName="docker-build" Jan 26 17:22:38 crc kubenswrapper[4856]: I0126 17:22:38.131646 4856 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 26 17:22:38 crc kubenswrapper[4856]: I0126 17:22:38.133672 4856 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-8h4xs" Jan 26 17:22:38 crc kubenswrapper[4856]: I0126 17:22:38.134105 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-global-ca" Jan 26 17:22:38 crc kubenswrapper[4856]: I0126 17:22:38.134188 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-ca" Jan 26 17:22:38 crc kubenswrapper[4856]: I0126 17:22:38.134314 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-sys-config" Jan 26 17:22:38 crc kubenswrapper[4856]: I0126 17:22:38.150763 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 26 17:22:38 crc kubenswrapper[4856]: I0126 17:22:38.174091 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 26 17:22:38 crc kubenswrapper[4856]: I0126 17:22:38.174144 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 26 17:22:38 crc kubenswrapper[4856]: I0126 17:22:38.174219 4856 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-builder-dockercfg-8h4xs-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 26 17:22:38 crc kubenswrapper[4856]: I0126 17:22:38.174281 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 26 17:22:38 crc kubenswrapper[4856]: I0126 17:22:38.174316 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 26 17:22:38 crc kubenswrapper[4856]: I0126 17:22:38.174361 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 26 17:22:38 crc kubenswrapper[4856]: I0126 17:22:38.174403 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: 
\"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 26 17:22:38 crc kubenswrapper[4856]: I0126 17:22:38.174440 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 26 17:22:38 crc kubenswrapper[4856]: I0126 17:22:38.174477 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-builder-dockercfg-8h4xs-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 26 17:22:38 crc kubenswrapper[4856]: I0126 17:22:38.174496 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 26 17:22:38 crc kubenswrapper[4856]: I0126 17:22:38.174512 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 26 17:22:38 crc kubenswrapper[4856]: I0126 17:22:38.174550 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-q9fg2\" (UniqueName: \"kubernetes.io/projected/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-kube-api-access-q9fg2\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 26 17:22:38 crc kubenswrapper[4856]: I0126 17:22:38.275538 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-builder-dockercfg-8h4xs-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 26 17:22:38 crc kubenswrapper[4856]: I0126 17:22:38.275583 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 26 17:22:38 crc kubenswrapper[4856]: I0126 17:22:38.275610 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 26 17:22:38 crc kubenswrapper[4856]: I0126 17:22:38.275635 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9fg2\" (UniqueName: \"kubernetes.io/projected/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-kube-api-access-q9fg2\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 26 17:22:38 crc kubenswrapper[4856]: I0126 17:22:38.275655 4856 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 26 17:22:38 crc kubenswrapper[4856]: I0126 17:22:38.275675 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 26 17:22:38 crc kubenswrapper[4856]: I0126 17:22:38.275695 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-builder-dockercfg-8h4xs-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 26 17:22:38 crc kubenswrapper[4856]: I0126 17:22:38.275717 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 26 17:22:38 crc kubenswrapper[4856]: I0126 17:22:38.275737 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\") " 
pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 26 17:22:38 crc kubenswrapper[4856]: I0126 17:22:38.275758 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 26 17:22:38 crc kubenswrapper[4856]: I0126 17:22:38.275775 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 26 17:22:38 crc kubenswrapper[4856]: I0126 17:22:38.275793 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 26 17:22:38 crc kubenswrapper[4856]: I0126 17:22:38.276484 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 26 17:22:38 crc kubenswrapper[4856]: I0126 17:22:38.276727 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: 
\"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 26 17:22:38 crc kubenswrapper[4856]: I0126 17:22:38.276904 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 26 17:22:38 crc kubenswrapper[4856]: I0126 17:22:38.277610 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 26 17:22:38 crc kubenswrapper[4856]: I0126 17:22:38.277661 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 26 17:22:38 crc kubenswrapper[4856]: I0126 17:22:38.278080 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 26 17:22:38 crc kubenswrapper[4856]: I0126 17:22:38.278227 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" 
(UID: \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 26 17:22:38 crc kubenswrapper[4856]: I0126 17:22:38.278361 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 26 17:22:38 crc kubenswrapper[4856]: I0126 17:22:38.278569 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 26 17:22:38 crc kubenswrapper[4856]: I0126 17:22:38.281031 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-builder-dockercfg-8h4xs-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 26 17:22:38 crc kubenswrapper[4856]: I0126 17:22:38.281307 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-builder-dockercfg-8h4xs-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 26 17:22:38 crc kubenswrapper[4856]: I0126 17:22:38.296072 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9fg2\" (UniqueName: 
\"kubernetes.io/projected/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-kube-api-access-q9fg2\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 26 17:22:38 crc kubenswrapper[4856]: I0126 17:22:38.447130 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 26 17:22:38 crc kubenswrapper[4856]: I0126 17:22:38.674519 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 26 17:22:38 crc kubenswrapper[4856]: I0126 17:22:38.993484 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0","Type":"ContainerStarted","Data":"3a3db54af766ff6e3ccc317cea1e0c54cb4eb5de533c2e962ee1bc8a2be3b885"} Jan 26 17:22:40 crc kubenswrapper[4856]: I0126 17:22:40.002797 4856 generic.go:334] "Generic (PLEG): container finished" podID="46e64022-34f9-4df3-a5aa-a8b9f20a4cb0" containerID="403583ea8675bdc45ae90876cfddf217f8d8287e642b81234cb02aced617aab6" exitCode=0 Jan 26 17:22:40 crc kubenswrapper[4856]: I0126 17:22:40.002893 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0","Type":"ContainerDied","Data":"403583ea8675bdc45ae90876cfddf217f8d8287e642b81234cb02aced617aab6"} Jan 26 17:22:41 crc kubenswrapper[4856]: I0126 17:22:41.011413 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0","Type":"ContainerStarted","Data":"e5f538bd0f87a5c33bc9d1f0b968a1b8c16df013efd23455a52c388972a23731"} Jan 26 17:22:41 crc kubenswrapper[4856]: I0126 17:22:41.035591 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="service-telemetry/prometheus-webhook-snmp-1-build" podStartSLOduration=3.035569627 podStartE2EDuration="3.035569627s" podCreationTimestamp="2026-01-26 17:22:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:22:41.031438719 +0000 UTC m=+1456.984692720" watchObservedRunningTime="2026-01-26 17:22:41.035569627 +0000 UTC m=+1456.988823628" Jan 26 17:22:49 crc kubenswrapper[4856]: I0126 17:22:49.239173 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 26 17:22:49 crc kubenswrapper[4856]: I0126 17:22:49.240008 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/prometheus-webhook-snmp-1-build" podUID="46e64022-34f9-4df3-a5aa-a8b9f20a4cb0" containerName="docker-build" containerID="cri-o://e5f538bd0f87a5c33bc9d1f0b968a1b8c16df013efd23455a52c388972a23731" gracePeriod=30 Jan 26 17:22:50 crc kubenswrapper[4856]: I0126 17:22:50.075336 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_46e64022-34f9-4df3-a5aa-a8b9f20a4cb0/docker-build/0.log" Jan 26 17:22:50 crc kubenswrapper[4856]: I0126 17:22:50.076174 4856 generic.go:334] "Generic (PLEG): container finished" podID="46e64022-34f9-4df3-a5aa-a8b9f20a4cb0" containerID="e5f538bd0f87a5c33bc9d1f0b968a1b8c16df013efd23455a52c388972a23731" exitCode=1 Jan 26 17:22:50 crc kubenswrapper[4856]: I0126 17:22:50.076224 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0","Type":"ContainerDied","Data":"e5f538bd0f87a5c33bc9d1f0b968a1b8c16df013efd23455a52c388972a23731"} Jan 26 17:22:50 crc kubenswrapper[4856]: I0126 17:22:50.140878 4856 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_46e64022-34f9-4df3-a5aa-a8b9f20a4cb0/docker-build/0.log" Jan 26 17:22:50 crc kubenswrapper[4856]: I0126 17:22:50.141739 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 26 17:22:50 crc kubenswrapper[4856]: I0126 17:22:50.248591 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-node-pullsecrets\") pod \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\" (UID: \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\") " Jan 26 17:22:50 crc kubenswrapper[4856]: I0126 17:22:50.248637 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-buildcachedir\") pod \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\" (UID: \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\") " Jan 26 17:22:50 crc kubenswrapper[4856]: I0126 17:22:50.248681 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-container-storage-root\") pod \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\" (UID: \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\") " Jan 26 17:22:50 crc kubenswrapper[4856]: I0126 17:22:50.248745 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-build-blob-cache\") pod \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\" (UID: \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\") " Jan 26 17:22:50 crc kubenswrapper[4856]: I0126 17:22:50.248743 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-node-pullsecrets" 
(OuterVolumeSpecName: "node-pullsecrets") pod "46e64022-34f9-4df3-a5aa-a8b9f20a4cb0" (UID: "46e64022-34f9-4df3-a5aa-a8b9f20a4cb0"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:22:50 crc kubenswrapper[4856]: I0126 17:22:50.248781 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9fg2\" (UniqueName: \"kubernetes.io/projected/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-kube-api-access-q9fg2\") pod \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\" (UID: \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\") " Jan 26 17:22:50 crc kubenswrapper[4856]: I0126 17:22:50.248816 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-builder-dockercfg-8h4xs-push\") pod \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\" (UID: \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\") " Jan 26 17:22:50 crc kubenswrapper[4856]: I0126 17:22:50.248842 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-buildworkdir\") pod \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\" (UID: \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\") " Jan 26 17:22:50 crc kubenswrapper[4856]: I0126 17:22:50.248873 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-build-proxy-ca-bundles\") pod \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\" (UID: \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\") " Jan 26 17:22:50 crc kubenswrapper[4856]: I0126 17:22:50.248895 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-builder-dockercfg-8h4xs-pull\") pod 
\"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\" (UID: \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\") " Jan 26 17:22:50 crc kubenswrapper[4856]: I0126 17:22:50.248904 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "46e64022-34f9-4df3-a5aa-a8b9f20a4cb0" (UID: "46e64022-34f9-4df3-a5aa-a8b9f20a4cb0"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:22:50 crc kubenswrapper[4856]: I0126 17:22:50.248920 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-build-system-configs\") pod \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\" (UID: \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\") " Jan 26 17:22:50 crc kubenswrapper[4856]: I0126 17:22:50.248972 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-container-storage-run\") pod \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\" (UID: \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\") " Jan 26 17:22:50 crc kubenswrapper[4856]: I0126 17:22:50.248997 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-build-ca-bundles\") pod \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\" (UID: \"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0\") " Jan 26 17:22:50 crc kubenswrapper[4856]: I0126 17:22:50.249177 4856 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 26 17:22:50 crc kubenswrapper[4856]: I0126 17:22:50.249192 4856 reconciler_common.go:293] "Volume 
detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 26 17:22:50 crc kubenswrapper[4856]: I0126 17:22:50.249413 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "46e64022-34f9-4df3-a5aa-a8b9f20a4cb0" (UID: "46e64022-34f9-4df3-a5aa-a8b9f20a4cb0"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:22:50 crc kubenswrapper[4856]: I0126 17:22:50.249902 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "46e64022-34f9-4df3-a5aa-a8b9f20a4cb0" (UID: "46e64022-34f9-4df3-a5aa-a8b9f20a4cb0"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:22:50 crc kubenswrapper[4856]: I0126 17:22:50.250328 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "46e64022-34f9-4df3-a5aa-a8b9f20a4cb0" (UID: "46e64022-34f9-4df3-a5aa-a8b9f20a4cb0"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:22:50 crc kubenswrapper[4856]: I0126 17:22:50.250880 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "46e64022-34f9-4df3-a5aa-a8b9f20a4cb0" (UID: "46e64022-34f9-4df3-a5aa-a8b9f20a4cb0"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:22:50 crc kubenswrapper[4856]: I0126 17:22:50.251312 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "46e64022-34f9-4df3-a5aa-a8b9f20a4cb0" (UID: "46e64022-34f9-4df3-a5aa-a8b9f20a4cb0"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:22:50 crc kubenswrapper[4856]: I0126 17:22:50.261270 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-builder-dockercfg-8h4xs-push" (OuterVolumeSpecName: "builder-dockercfg-8h4xs-push") pod "46e64022-34f9-4df3-a5aa-a8b9f20a4cb0" (UID: "46e64022-34f9-4df3-a5aa-a8b9f20a4cb0"). InnerVolumeSpecName "builder-dockercfg-8h4xs-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:22:50 crc kubenswrapper[4856]: I0126 17:22:50.261844 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-kube-api-access-q9fg2" (OuterVolumeSpecName: "kube-api-access-q9fg2") pod "46e64022-34f9-4df3-a5aa-a8b9f20a4cb0" (UID: "46e64022-34f9-4df3-a5aa-a8b9f20a4cb0"). InnerVolumeSpecName "kube-api-access-q9fg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:22:50 crc kubenswrapper[4856]: I0126 17:22:50.271574 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-builder-dockercfg-8h4xs-pull" (OuterVolumeSpecName: "builder-dockercfg-8h4xs-pull") pod "46e64022-34f9-4df3-a5aa-a8b9f20a4cb0" (UID: "46e64022-34f9-4df3-a5aa-a8b9f20a4cb0"). InnerVolumeSpecName "builder-dockercfg-8h4xs-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:22:50 crc kubenswrapper[4856]: I0126 17:22:50.321915 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "46e64022-34f9-4df3-a5aa-a8b9f20a4cb0" (UID: "46e64022-34f9-4df3-a5aa-a8b9f20a4cb0"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:22:50 crc kubenswrapper[4856]: I0126 17:22:50.349942 4856 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-builder-dockercfg-8h4xs-push\") on node \"crc\" DevicePath \"\"" Jan 26 17:22:50 crc kubenswrapper[4856]: I0126 17:22:50.349976 4856 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 26 17:22:50 crc kubenswrapper[4856]: I0126 17:22:50.349987 4856 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 17:22:50 crc kubenswrapper[4856]: I0126 17:22:50.349996 4856 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-builder-dockercfg-8h4xs-pull\") on node \"crc\" DevicePath \"\"" Jan 26 17:22:50 crc kubenswrapper[4856]: I0126 17:22:50.350004 4856 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 26 17:22:50 crc kubenswrapper[4856]: I0126 17:22:50.350012 4856 
reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 26 17:22:50 crc kubenswrapper[4856]: I0126 17:22:50.350020 4856 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 17:22:50 crc kubenswrapper[4856]: I0126 17:22:50.350028 4856 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 26 17:22:50 crc kubenswrapper[4856]: I0126 17:22:50.350036 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9fg2\" (UniqueName: \"kubernetes.io/projected/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-kube-api-access-q9fg2\") on node \"crc\" DevicePath \"\"" Jan 26 17:22:50 crc kubenswrapper[4856]: I0126 17:22:50.655088 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "46e64022-34f9-4df3-a5aa-a8b9f20a4cb0" (UID: "46e64022-34f9-4df3-a5aa-a8b9f20a4cb0"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:22:50 crc kubenswrapper[4856]: I0126 17:22:50.761167 4856 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.084291 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_46e64022-34f9-4df3-a5aa-a8b9f20a4cb0/docker-build/0.log" Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.084742 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"46e64022-34f9-4df3-a5aa-a8b9f20a4cb0","Type":"ContainerDied","Data":"3a3db54af766ff6e3ccc317cea1e0c54cb4eb5de533c2e962ee1bc8a2be3b885"} Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.084817 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.084841 4856 scope.go:117] "RemoveContainer" containerID="e5f538bd0f87a5c33bc9d1f0b968a1b8c16df013efd23455a52c388972a23731" Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.114678 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.117674 4856 scope.go:117] "RemoveContainer" containerID="403583ea8675bdc45ae90876cfddf217f8d8287e642b81234cb02aced617aab6" Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.122380 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.378102 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Jan 26 17:22:51 crc kubenswrapper[4856]: E0126 17:22:51.378800 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46e64022-34f9-4df3-a5aa-a8b9f20a4cb0" containerName="docker-build" Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.378819 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="46e64022-34f9-4df3-a5aa-a8b9f20a4cb0" containerName="docker-build" Jan 26 17:22:51 crc kubenswrapper[4856]: E0126 17:22:51.378847 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46e64022-34f9-4df3-a5aa-a8b9f20a4cb0" containerName="manage-dockerfile" Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.378858 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="46e64022-34f9-4df3-a5aa-a8b9f20a4cb0" containerName="manage-dockerfile" Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.379003 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="46e64022-34f9-4df3-a5aa-a8b9f20a4cb0" containerName="docker-build" Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 
17:22:51.380105 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.383166 4856 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-8h4xs" Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.383760 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-sys-config" Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.384612 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-ca" Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.384973 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-global-ca" Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.406289 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46e64022-34f9-4df3-a5aa-a8b9f20a4cb0" path="/var/lib/kubelet/pods/46e64022-34f9-4df3-a5aa-a8b9f20a4cb0/volumes" Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.407265 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.575135 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/605203af-fcdf-42c0-a66f-5c412f8e7770-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"605203af-fcdf-42c0-a66f-5c412f8e7770\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.575204 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/605203af-fcdf-42c0-a66f-5c412f8e7770-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"605203af-fcdf-42c0-a66f-5c412f8e7770\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.575236 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/605203af-fcdf-42c0-a66f-5c412f8e7770-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"605203af-fcdf-42c0-a66f-5c412f8e7770\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.575258 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/605203af-fcdf-42c0-a66f-5c412f8e7770-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"605203af-fcdf-42c0-a66f-5c412f8e7770\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.575330 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/605203af-fcdf-42c0-a66f-5c412f8e7770-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"605203af-fcdf-42c0-a66f-5c412f8e7770\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.575358 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/605203af-fcdf-42c0-a66f-5c412f8e7770-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"605203af-fcdf-42c0-a66f-5c412f8e7770\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.575465 4856 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/605203af-fcdf-42c0-a66f-5c412f8e7770-builder-dockercfg-8h4xs-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"605203af-fcdf-42c0-a66f-5c412f8e7770\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.575559 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/605203af-fcdf-42c0-a66f-5c412f8e7770-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"605203af-fcdf-42c0-a66f-5c412f8e7770\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.575586 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/605203af-fcdf-42c0-a66f-5c412f8e7770-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"605203af-fcdf-42c0-a66f-5c412f8e7770\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.575622 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/605203af-fcdf-42c0-a66f-5c412f8e7770-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"605203af-fcdf-42c0-a66f-5c412f8e7770\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.575662 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/605203af-fcdf-42c0-a66f-5c412f8e7770-builder-dockercfg-8h4xs-pull\") pod 
\"prometheus-webhook-snmp-2-build\" (UID: \"605203af-fcdf-42c0-a66f-5c412f8e7770\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.575698 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxrc8\" (UniqueName: \"kubernetes.io/projected/605203af-fcdf-42c0-a66f-5c412f8e7770-kube-api-access-gxrc8\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"605203af-fcdf-42c0-a66f-5c412f8e7770\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.676862 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/605203af-fcdf-42c0-a66f-5c412f8e7770-builder-dockercfg-8h4xs-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"605203af-fcdf-42c0-a66f-5c412f8e7770\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.677191 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxrc8\" (UniqueName: \"kubernetes.io/projected/605203af-fcdf-42c0-a66f-5c412f8e7770-kube-api-access-gxrc8\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"605203af-fcdf-42c0-a66f-5c412f8e7770\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.677288 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/605203af-fcdf-42c0-a66f-5c412f8e7770-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"605203af-fcdf-42c0-a66f-5c412f8e7770\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.677395 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/605203af-fcdf-42c0-a66f-5c412f8e7770-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"605203af-fcdf-42c0-a66f-5c412f8e7770\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.677479 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/605203af-fcdf-42c0-a66f-5c412f8e7770-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"605203af-fcdf-42c0-a66f-5c412f8e7770\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.677585 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/605203af-fcdf-42c0-a66f-5c412f8e7770-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"605203af-fcdf-42c0-a66f-5c412f8e7770\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.677644 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/605203af-fcdf-42c0-a66f-5c412f8e7770-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"605203af-fcdf-42c0-a66f-5c412f8e7770\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.677669 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/605203af-fcdf-42c0-a66f-5c412f8e7770-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"605203af-fcdf-42c0-a66f-5c412f8e7770\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.677517 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/605203af-fcdf-42c0-a66f-5c412f8e7770-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"605203af-fcdf-42c0-a66f-5c412f8e7770\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.677852 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/605203af-fcdf-42c0-a66f-5c412f8e7770-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"605203af-fcdf-42c0-a66f-5c412f8e7770\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.677821 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/605203af-fcdf-42c0-a66f-5c412f8e7770-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"605203af-fcdf-42c0-a66f-5c412f8e7770\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.678004 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/605203af-fcdf-42c0-a66f-5c412f8e7770-builder-dockercfg-8h4xs-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"605203af-fcdf-42c0-a66f-5c412f8e7770\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.678074 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/605203af-fcdf-42c0-a66f-5c412f8e7770-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"605203af-fcdf-42c0-a66f-5c412f8e7770\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.678113 
4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/605203af-fcdf-42c0-a66f-5c412f8e7770-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"605203af-fcdf-42c0-a66f-5c412f8e7770\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.678164 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/605203af-fcdf-42c0-a66f-5c412f8e7770-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"605203af-fcdf-42c0-a66f-5c412f8e7770\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.678222 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/605203af-fcdf-42c0-a66f-5c412f8e7770-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"605203af-fcdf-42c0-a66f-5c412f8e7770\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.678381 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/605203af-fcdf-42c0-a66f-5c412f8e7770-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"605203af-fcdf-42c0-a66f-5c412f8e7770\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.678983 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/605203af-fcdf-42c0-a66f-5c412f8e7770-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"605203af-fcdf-42c0-a66f-5c412f8e7770\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 26 17:22:51 crc 
kubenswrapper[4856]: I0126 17:22:51.679269 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/605203af-fcdf-42c0-a66f-5c412f8e7770-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"605203af-fcdf-42c0-a66f-5c412f8e7770\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.679330 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/605203af-fcdf-42c0-a66f-5c412f8e7770-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"605203af-fcdf-42c0-a66f-5c412f8e7770\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.679907 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/605203af-fcdf-42c0-a66f-5c412f8e7770-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"605203af-fcdf-42c0-a66f-5c412f8e7770\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.681940 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/605203af-fcdf-42c0-a66f-5c412f8e7770-builder-dockercfg-8h4xs-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"605203af-fcdf-42c0-a66f-5c412f8e7770\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.682807 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/605203af-fcdf-42c0-a66f-5c412f8e7770-builder-dockercfg-8h4xs-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"605203af-fcdf-42c0-a66f-5c412f8e7770\") " 
pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.696226 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxrc8\" (UniqueName: \"kubernetes.io/projected/605203af-fcdf-42c0-a66f-5c412f8e7770-kube-api-access-gxrc8\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"605203af-fcdf-42c0-a66f-5c412f8e7770\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.704219 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 26 17:22:51 crc kubenswrapper[4856]: I0126 17:22:51.922053 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Jan 26 17:22:52 crc kubenswrapper[4856]: I0126 17:22:52.092208 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"605203af-fcdf-42c0-a66f-5c412f8e7770","Type":"ContainerStarted","Data":"f9ad774bd179b8409f4822d0009683c738837e6f9f7337a5a04fab814edc853d"} Jan 26 17:22:53 crc kubenswrapper[4856]: I0126 17:22:53.100517 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"605203af-fcdf-42c0-a66f-5c412f8e7770","Type":"ContainerStarted","Data":"ddf87c33308074ead375fc61c59fd29203d99ff38dd4ed590fdf9b02056267ff"} Jan 26 17:22:54 crc kubenswrapper[4856]: I0126 17:22:54.109772 4856 generic.go:334] "Generic (PLEG): container finished" podID="605203af-fcdf-42c0-a66f-5c412f8e7770" containerID="ddf87c33308074ead375fc61c59fd29203d99ff38dd4ed590fdf9b02056267ff" exitCode=0 Jan 26 17:22:54 crc kubenswrapper[4856]: I0126 17:22:54.110003 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" 
event={"ID":"605203af-fcdf-42c0-a66f-5c412f8e7770","Type":"ContainerDied","Data":"ddf87c33308074ead375fc61c59fd29203d99ff38dd4ed590fdf9b02056267ff"} Jan 26 17:22:55 crc kubenswrapper[4856]: I0126 17:22:55.117514 4856 generic.go:334] "Generic (PLEG): container finished" podID="605203af-fcdf-42c0-a66f-5c412f8e7770" containerID="5a26cc5ef603c95f249fb40b3e1e055a5dd3ae0f33c81b0f8dfd906d88554058" exitCode=0 Jan 26 17:22:55 crc kubenswrapper[4856]: I0126 17:22:55.117626 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"605203af-fcdf-42c0-a66f-5c412f8e7770","Type":"ContainerDied","Data":"5a26cc5ef603c95f249fb40b3e1e055a5dd3ae0f33c81b0f8dfd906d88554058"} Jan 26 17:22:55 crc kubenswrapper[4856]: I0126 17:22:55.154182 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_605203af-fcdf-42c0-a66f-5c412f8e7770/manage-dockerfile/0.log" Jan 26 17:22:56 crc kubenswrapper[4856]: I0126 17:22:56.127378 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"605203af-fcdf-42c0-a66f-5c412f8e7770","Type":"ContainerStarted","Data":"173ac321b06d4b28b5b12828ba97f3f373de2fefc48808037e0f118d81499c95"} Jan 26 17:22:56 crc kubenswrapper[4856]: I0126 17:22:56.157580 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-2-build" podStartSLOduration=5.157553043 podStartE2EDuration="5.157553043s" podCreationTimestamp="2026-01-26 17:22:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:22:56.153131486 +0000 UTC m=+1472.106385557" watchObservedRunningTime="2026-01-26 17:22:56.157553043 +0000 UTC m=+1472.110807034" Jan 26 17:22:56 crc kubenswrapper[4856]: I0126 17:22:56.938941 4856 patch_prober.go:28] interesting 
pod/machine-config-daemon-xm9cq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:22:56 crc kubenswrapper[4856]: I0126 17:22:56.939009 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:23:26 crc kubenswrapper[4856]: I0126 17:23:26.365978 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_605203af-fcdf-42c0-a66f-5c412f8e7770/docker-build/0.log" Jan 26 17:23:26 crc kubenswrapper[4856]: I0126 17:23:26.368390 4856 generic.go:334] "Generic (PLEG): container finished" podID="605203af-fcdf-42c0-a66f-5c412f8e7770" containerID="173ac321b06d4b28b5b12828ba97f3f373de2fefc48808037e0f118d81499c95" exitCode=1 Jan 26 17:23:26 crc kubenswrapper[4856]: I0126 17:23:26.368437 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"605203af-fcdf-42c0-a66f-5c412f8e7770","Type":"ContainerDied","Data":"173ac321b06d4b28b5b12828ba97f3f373de2fefc48808037e0f118d81499c95"} Jan 26 17:23:26 crc kubenswrapper[4856]: I0126 17:23:26.938899 4856 patch_prober.go:28] interesting pod/machine-config-daemon-xm9cq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:23:26 crc kubenswrapper[4856]: I0126 17:23:26.939187 4856 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:23:26 crc kubenswrapper[4856]: I0126 17:23:26.939232 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" Jan 26 17:23:26 crc kubenswrapper[4856]: I0126 17:23:26.939846 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cda3cdbac0b1e3c460ee9a5617b9c5fd59d4db5c67a69b81c9224934be12563c"} pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 17:23:26 crc kubenswrapper[4856]: I0126 17:23:26.939902 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" containerName="machine-config-daemon" containerID="cri-o://cda3cdbac0b1e3c460ee9a5617b9c5fd59d4db5c67a69b81c9224934be12563c" gracePeriod=600 Jan 26 17:23:27 crc kubenswrapper[4856]: I0126 17:23:27.379116 4856 generic.go:334] "Generic (PLEG): container finished" podID="63c75ede-5170-4db0-811b-5217ef8d72b3" containerID="cda3cdbac0b1e3c460ee9a5617b9c5fd59d4db5c67a69b81c9224934be12563c" exitCode=0 Jan 26 17:23:27 crc kubenswrapper[4856]: I0126 17:23:27.379311 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" event={"ID":"63c75ede-5170-4db0-811b-5217ef8d72b3","Type":"ContainerDied","Data":"cda3cdbac0b1e3c460ee9a5617b9c5fd59d4db5c67a69b81c9224934be12563c"} Jan 26 17:23:27 crc kubenswrapper[4856]: I0126 17:23:27.380393 4856 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" event={"ID":"63c75ede-5170-4db0-811b-5217ef8d72b3","Type":"ContainerStarted","Data":"b8175a0e79754a858867d9a98f2aa6c52214536db6005b6724cf907eb7a891ee"} Jan 26 17:23:27 crc kubenswrapper[4856]: I0126 17:23:27.380450 4856 scope.go:117] "RemoveContainer" containerID="5846ab4d870be5fcbab796c3e27690d2c13d129480d6fcd21b3b0d1c535f0cff" Jan 26 17:23:27 crc kubenswrapper[4856]: I0126 17:23:27.702859 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_605203af-fcdf-42c0-a66f-5c412f8e7770/docker-build/0.log" Jan 26 17:23:27 crc kubenswrapper[4856]: I0126 17:23:27.704476 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 26 17:23:27 crc kubenswrapper[4856]: I0126 17:23:27.794311 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxrc8\" (UniqueName: \"kubernetes.io/projected/605203af-fcdf-42c0-a66f-5c412f8e7770-kube-api-access-gxrc8\") pod \"605203af-fcdf-42c0-a66f-5c412f8e7770\" (UID: \"605203af-fcdf-42c0-a66f-5c412f8e7770\") " Jan 26 17:23:27 crc kubenswrapper[4856]: I0126 17:23:27.794368 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/605203af-fcdf-42c0-a66f-5c412f8e7770-container-storage-run\") pod \"605203af-fcdf-42c0-a66f-5c412f8e7770\" (UID: \"605203af-fcdf-42c0-a66f-5c412f8e7770\") " Jan 26 17:23:27 crc kubenswrapper[4856]: I0126 17:23:27.794401 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/605203af-fcdf-42c0-a66f-5c412f8e7770-build-system-configs\") pod \"605203af-fcdf-42c0-a66f-5c412f8e7770\" (UID: \"605203af-fcdf-42c0-a66f-5c412f8e7770\") " Jan 26 17:23:27 crc kubenswrapper[4856]: 
I0126 17:23:27.794445 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/605203af-fcdf-42c0-a66f-5c412f8e7770-node-pullsecrets\") pod \"605203af-fcdf-42c0-a66f-5c412f8e7770\" (UID: \"605203af-fcdf-42c0-a66f-5c412f8e7770\") " Jan 26 17:23:27 crc kubenswrapper[4856]: I0126 17:23:27.794484 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/605203af-fcdf-42c0-a66f-5c412f8e7770-build-proxy-ca-bundles\") pod \"605203af-fcdf-42c0-a66f-5c412f8e7770\" (UID: \"605203af-fcdf-42c0-a66f-5c412f8e7770\") " Jan 26 17:23:27 crc kubenswrapper[4856]: I0126 17:23:27.794547 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/605203af-fcdf-42c0-a66f-5c412f8e7770-build-blob-cache\") pod \"605203af-fcdf-42c0-a66f-5c412f8e7770\" (UID: \"605203af-fcdf-42c0-a66f-5c412f8e7770\") " Jan 26 17:23:27 crc kubenswrapper[4856]: I0126 17:23:27.794593 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/605203af-fcdf-42c0-a66f-5c412f8e7770-builder-dockercfg-8h4xs-pull\") pod \"605203af-fcdf-42c0-a66f-5c412f8e7770\" (UID: \"605203af-fcdf-42c0-a66f-5c412f8e7770\") " Jan 26 17:23:27 crc kubenswrapper[4856]: I0126 17:23:27.794635 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/605203af-fcdf-42c0-a66f-5c412f8e7770-build-ca-bundles\") pod \"605203af-fcdf-42c0-a66f-5c412f8e7770\" (UID: \"605203af-fcdf-42c0-a66f-5c412f8e7770\") " Jan 26 17:23:27 crc kubenswrapper[4856]: I0126 17:23:27.794655 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: 
\"kubernetes.io/secret/605203af-fcdf-42c0-a66f-5c412f8e7770-builder-dockercfg-8h4xs-push\") pod \"605203af-fcdf-42c0-a66f-5c412f8e7770\" (UID: \"605203af-fcdf-42c0-a66f-5c412f8e7770\") " Jan 26 17:23:27 crc kubenswrapper[4856]: I0126 17:23:27.794674 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/605203af-fcdf-42c0-a66f-5c412f8e7770-container-storage-root\") pod \"605203af-fcdf-42c0-a66f-5c412f8e7770\" (UID: \"605203af-fcdf-42c0-a66f-5c412f8e7770\") " Jan 26 17:23:27 crc kubenswrapper[4856]: I0126 17:23:27.794687 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/605203af-fcdf-42c0-a66f-5c412f8e7770-buildcachedir\") pod \"605203af-fcdf-42c0-a66f-5c412f8e7770\" (UID: \"605203af-fcdf-42c0-a66f-5c412f8e7770\") " Jan 26 17:23:27 crc kubenswrapper[4856]: I0126 17:23:27.794721 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/605203af-fcdf-42c0-a66f-5c412f8e7770-buildworkdir\") pod \"605203af-fcdf-42c0-a66f-5c412f8e7770\" (UID: \"605203af-fcdf-42c0-a66f-5c412f8e7770\") " Jan 26 17:23:27 crc kubenswrapper[4856]: I0126 17:23:27.795037 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/605203af-fcdf-42c0-a66f-5c412f8e7770-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "605203af-fcdf-42c0-a66f-5c412f8e7770" (UID: "605203af-fcdf-42c0-a66f-5c412f8e7770"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:23:27 crc kubenswrapper[4856]: I0126 17:23:27.795270 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/605203af-fcdf-42c0-a66f-5c412f8e7770-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "605203af-fcdf-42c0-a66f-5c412f8e7770" (UID: "605203af-fcdf-42c0-a66f-5c412f8e7770"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:23:27 crc kubenswrapper[4856]: I0126 17:23:27.795476 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/605203af-fcdf-42c0-a66f-5c412f8e7770-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "605203af-fcdf-42c0-a66f-5c412f8e7770" (UID: "605203af-fcdf-42c0-a66f-5c412f8e7770"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:23:27 crc kubenswrapper[4856]: I0126 17:23:27.795620 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/605203af-fcdf-42c0-a66f-5c412f8e7770-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "605203af-fcdf-42c0-a66f-5c412f8e7770" (UID: "605203af-fcdf-42c0-a66f-5c412f8e7770"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:23:27 crc kubenswrapper[4856]: I0126 17:23:27.794945 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/605203af-fcdf-42c0-a66f-5c412f8e7770-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "605203af-fcdf-42c0-a66f-5c412f8e7770" (UID: "605203af-fcdf-42c0-a66f-5c412f8e7770"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:23:27 crc kubenswrapper[4856]: I0126 17:23:27.797189 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/605203af-fcdf-42c0-a66f-5c412f8e7770-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "605203af-fcdf-42c0-a66f-5c412f8e7770" (UID: "605203af-fcdf-42c0-a66f-5c412f8e7770"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:23:27 crc kubenswrapper[4856]: I0126 17:23:27.798005 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/605203af-fcdf-42c0-a66f-5c412f8e7770-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "605203af-fcdf-42c0-a66f-5c412f8e7770" (UID: "605203af-fcdf-42c0-a66f-5c412f8e7770"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:23:27 crc kubenswrapper[4856]: I0126 17:23:27.801883 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/605203af-fcdf-42c0-a66f-5c412f8e7770-builder-dockercfg-8h4xs-pull" (OuterVolumeSpecName: "builder-dockercfg-8h4xs-pull") pod "605203af-fcdf-42c0-a66f-5c412f8e7770" (UID: "605203af-fcdf-42c0-a66f-5c412f8e7770"). InnerVolumeSpecName "builder-dockercfg-8h4xs-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:23:27 crc kubenswrapper[4856]: I0126 17:23:27.804148 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/605203af-fcdf-42c0-a66f-5c412f8e7770-kube-api-access-gxrc8" (OuterVolumeSpecName: "kube-api-access-gxrc8") pod "605203af-fcdf-42c0-a66f-5c412f8e7770" (UID: "605203af-fcdf-42c0-a66f-5c412f8e7770"). InnerVolumeSpecName "kube-api-access-gxrc8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:23:27 crc kubenswrapper[4856]: I0126 17:23:27.807901 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/605203af-fcdf-42c0-a66f-5c412f8e7770-builder-dockercfg-8h4xs-push" (OuterVolumeSpecName: "builder-dockercfg-8h4xs-push") pod "605203af-fcdf-42c0-a66f-5c412f8e7770" (UID: "605203af-fcdf-42c0-a66f-5c412f8e7770"). InnerVolumeSpecName "builder-dockercfg-8h4xs-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:23:27 crc kubenswrapper[4856]: I0126 17:23:27.896661 4856 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/605203af-fcdf-42c0-a66f-5c412f8e7770-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 26 17:23:27 crc kubenswrapper[4856]: I0126 17:23:27.896914 4856 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/605203af-fcdf-42c0-a66f-5c412f8e7770-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 17:23:27 crc kubenswrapper[4856]: I0126 17:23:27.897026 4856 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/605203af-fcdf-42c0-a66f-5c412f8e7770-builder-dockercfg-8h4xs-pull\") on node \"crc\" DevicePath \"\"" Jan 26 17:23:27 crc kubenswrapper[4856]: I0126 17:23:27.897113 4856 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/605203af-fcdf-42c0-a66f-5c412f8e7770-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 17:23:27 crc kubenswrapper[4856]: I0126 17:23:27.897193 4856 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/605203af-fcdf-42c0-a66f-5c412f8e7770-builder-dockercfg-8h4xs-push\") on node \"crc\" DevicePath \"\"" Jan 26 17:23:27 crc kubenswrapper[4856]: I0126 
17:23:27.897286 4856 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/605203af-fcdf-42c0-a66f-5c412f8e7770-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 26 17:23:27 crc kubenswrapper[4856]: I0126 17:23:27.897367 4856 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/605203af-fcdf-42c0-a66f-5c412f8e7770-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 26 17:23:27 crc kubenswrapper[4856]: I0126 17:23:27.897452 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxrc8\" (UniqueName: \"kubernetes.io/projected/605203af-fcdf-42c0-a66f-5c412f8e7770-kube-api-access-gxrc8\") on node \"crc\" DevicePath \"\"" Jan 26 17:23:27 crc kubenswrapper[4856]: I0126 17:23:27.897554 4856 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/605203af-fcdf-42c0-a66f-5c412f8e7770-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 26 17:23:27 crc kubenswrapper[4856]: I0126 17:23:27.897649 4856 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/605203af-fcdf-42c0-a66f-5c412f8e7770-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 26 17:23:27 crc kubenswrapper[4856]: I0126 17:23:27.904829 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/605203af-fcdf-42c0-a66f-5c412f8e7770-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "605203af-fcdf-42c0-a66f-5c412f8e7770" (UID: "605203af-fcdf-42c0-a66f-5c412f8e7770"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:23:27 crc kubenswrapper[4856]: I0126 17:23:27.999748 4856 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/605203af-fcdf-42c0-a66f-5c412f8e7770-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 26 17:23:28 crc kubenswrapper[4856]: I0126 17:23:28.359959 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/605203af-fcdf-42c0-a66f-5c412f8e7770-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "605203af-fcdf-42c0-a66f-5c412f8e7770" (UID: "605203af-fcdf-42c0-a66f-5c412f8e7770"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:23:28 crc kubenswrapper[4856]: I0126 17:23:28.396602 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_605203af-fcdf-42c0-a66f-5c412f8e7770/docker-build/0.log" Jan 26 17:23:28 crc kubenswrapper[4856]: I0126 17:23:28.398309 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"605203af-fcdf-42c0-a66f-5c412f8e7770","Type":"ContainerDied","Data":"f9ad774bd179b8409f4822d0009683c738837e6f9f7337a5a04fab814edc853d"} Jan 26 17:23:28 crc kubenswrapper[4856]: I0126 17:23:28.398358 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9ad774bd179b8409f4822d0009683c738837e6f9f7337a5a04fab814edc853d" Jan 26 17:23:28 crc kubenswrapper[4856]: I0126 17:23:28.398446 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 26 17:23:28 crc kubenswrapper[4856]: I0126 17:23:28.405463 4856 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/605203af-fcdf-42c0-a66f-5c412f8e7770-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 26 17:23:38 crc kubenswrapper[4856]: I0126 17:23:38.084616 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-3-build"] Jan 26 17:23:38 crc kubenswrapper[4856]: E0126 17:23:38.086425 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="605203af-fcdf-42c0-a66f-5c412f8e7770" containerName="git-clone" Jan 26 17:23:38 crc kubenswrapper[4856]: I0126 17:23:38.086458 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="605203af-fcdf-42c0-a66f-5c412f8e7770" containerName="git-clone" Jan 26 17:23:38 crc kubenswrapper[4856]: E0126 17:23:38.086490 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="605203af-fcdf-42c0-a66f-5c412f8e7770" containerName="docker-build" Jan 26 17:23:38 crc kubenswrapper[4856]: I0126 17:23:38.086507 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="605203af-fcdf-42c0-a66f-5c412f8e7770" containerName="docker-build" Jan 26 17:23:38 crc kubenswrapper[4856]: E0126 17:23:38.086564 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="605203af-fcdf-42c0-a66f-5c412f8e7770" containerName="manage-dockerfile" Jan 26 17:23:38 crc kubenswrapper[4856]: I0126 17:23:38.086580 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="605203af-fcdf-42c0-a66f-5c412f8e7770" containerName="manage-dockerfile" Jan 26 17:23:38 crc kubenswrapper[4856]: I0126 17:23:38.086822 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="605203af-fcdf-42c0-a66f-5c412f8e7770" containerName="docker-build" Jan 26 17:23:38 crc kubenswrapper[4856]: I0126 17:23:38.088269 4856 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-3-build" Jan 26 17:23:38 crc kubenswrapper[4856]: I0126 17:23:38.090731 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-3-sys-config" Jan 26 17:23:38 crc kubenswrapper[4856]: I0126 17:23:38.091972 4856 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-8h4xs" Jan 26 17:23:38 crc kubenswrapper[4856]: I0126 17:23:38.093309 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-3-global-ca" Jan 26 17:23:38 crc kubenswrapper[4856]: I0126 17:23:38.093775 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-3-ca" Jan 26 17:23:38 crc kubenswrapper[4856]: I0126 17:23:38.105618 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-3-build"] Jan 26 17:23:38 crc kubenswrapper[4856]: I0126 17:23:38.269290 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c15c4956-0479-4646-86c9-ca4a7ce31a28-build-blob-cache\") pod \"prometheus-webhook-snmp-3-build\" (UID: \"c15c4956-0479-4646-86c9-ca4a7ce31a28\") " pod="service-telemetry/prometheus-webhook-snmp-3-build" Jan 26 17:23:38 crc kubenswrapper[4856]: I0126 17:23:38.269386 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c15c4956-0479-4646-86c9-ca4a7ce31a28-buildcachedir\") pod \"prometheus-webhook-snmp-3-build\" (UID: \"c15c4956-0479-4646-86c9-ca4a7ce31a28\") " pod="service-telemetry/prometheus-webhook-snmp-3-build" Jan 26 17:23:38 crc kubenswrapper[4856]: I0126 17:23:38.269455 4856 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c15c4956-0479-4646-86c9-ca4a7ce31a28-container-storage-run\") pod \"prometheus-webhook-snmp-3-build\" (UID: \"c15c4956-0479-4646-86c9-ca4a7ce31a28\") " pod="service-telemetry/prometheus-webhook-snmp-3-build" Jan 26 17:23:38 crc kubenswrapper[4856]: I0126 17:23:38.269589 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s28tl\" (UniqueName: \"kubernetes.io/projected/c15c4956-0479-4646-86c9-ca4a7ce31a28-kube-api-access-s28tl\") pod \"prometheus-webhook-snmp-3-build\" (UID: \"c15c4956-0479-4646-86c9-ca4a7ce31a28\") " pod="service-telemetry/prometheus-webhook-snmp-3-build" Jan 26 17:23:38 crc kubenswrapper[4856]: I0126 17:23:38.269663 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c15c4956-0479-4646-86c9-ca4a7ce31a28-node-pullsecrets\") pod \"prometheus-webhook-snmp-3-build\" (UID: \"c15c4956-0479-4646-86c9-ca4a7ce31a28\") " pod="service-telemetry/prometheus-webhook-snmp-3-build" Jan 26 17:23:38 crc kubenswrapper[4856]: I0126 17:23:38.269737 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c15c4956-0479-4646-86c9-ca4a7ce31a28-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-3-build\" (UID: \"c15c4956-0479-4646-86c9-ca4a7ce31a28\") " pod="service-telemetry/prometheus-webhook-snmp-3-build" Jan 26 17:23:38 crc kubenswrapper[4856]: I0126 17:23:38.269825 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c15c4956-0479-4646-86c9-ca4a7ce31a28-build-ca-bundles\") pod \"prometheus-webhook-snmp-3-build\" (UID: 
\"c15c4956-0479-4646-86c9-ca4a7ce31a28\") " pod="service-telemetry/prometheus-webhook-snmp-3-build" Jan 26 17:23:38 crc kubenswrapper[4856]: I0126 17:23:38.269894 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c15c4956-0479-4646-86c9-ca4a7ce31a28-container-storage-root\") pod \"prometheus-webhook-snmp-3-build\" (UID: \"c15c4956-0479-4646-86c9-ca4a7ce31a28\") " pod="service-telemetry/prometheus-webhook-snmp-3-build" Jan 26 17:23:38 crc kubenswrapper[4856]: I0126 17:23:38.269970 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/c15c4956-0479-4646-86c9-ca4a7ce31a28-builder-dockercfg-8h4xs-pull\") pod \"prometheus-webhook-snmp-3-build\" (UID: \"c15c4956-0479-4646-86c9-ca4a7ce31a28\") " pod="service-telemetry/prometheus-webhook-snmp-3-build" Jan 26 17:23:38 crc kubenswrapper[4856]: I0126 17:23:38.270029 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c15c4956-0479-4646-86c9-ca4a7ce31a28-build-system-configs\") pod \"prometheus-webhook-snmp-3-build\" (UID: \"c15c4956-0479-4646-86c9-ca4a7ce31a28\") " pod="service-telemetry/prometheus-webhook-snmp-3-build" Jan 26 17:23:38 crc kubenswrapper[4856]: I0126 17:23:38.270124 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/c15c4956-0479-4646-86c9-ca4a7ce31a28-builder-dockercfg-8h4xs-push\") pod \"prometheus-webhook-snmp-3-build\" (UID: \"c15c4956-0479-4646-86c9-ca4a7ce31a28\") " pod="service-telemetry/prometheus-webhook-snmp-3-build" Jan 26 17:23:38 crc kubenswrapper[4856]: I0126 17:23:38.270238 4856 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c15c4956-0479-4646-86c9-ca4a7ce31a28-buildworkdir\") pod \"prometheus-webhook-snmp-3-build\" (UID: \"c15c4956-0479-4646-86c9-ca4a7ce31a28\") " pod="service-telemetry/prometheus-webhook-snmp-3-build" Jan 26 17:23:38 crc kubenswrapper[4856]: I0126 17:23:38.371821 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c15c4956-0479-4646-86c9-ca4a7ce31a28-container-storage-root\") pod \"prometheus-webhook-snmp-3-build\" (UID: \"c15c4956-0479-4646-86c9-ca4a7ce31a28\") " pod="service-telemetry/prometheus-webhook-snmp-3-build" Jan 26 17:23:38 crc kubenswrapper[4856]: I0126 17:23:38.371880 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/c15c4956-0479-4646-86c9-ca4a7ce31a28-builder-dockercfg-8h4xs-pull\") pod \"prometheus-webhook-snmp-3-build\" (UID: \"c15c4956-0479-4646-86c9-ca4a7ce31a28\") " pod="service-telemetry/prometheus-webhook-snmp-3-build" Jan 26 17:23:38 crc kubenswrapper[4856]: I0126 17:23:38.371900 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c15c4956-0479-4646-86c9-ca4a7ce31a28-build-system-configs\") pod \"prometheus-webhook-snmp-3-build\" (UID: \"c15c4956-0479-4646-86c9-ca4a7ce31a28\") " pod="service-telemetry/prometheus-webhook-snmp-3-build" Jan 26 17:23:38 crc kubenswrapper[4856]: I0126 17:23:38.371922 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/c15c4956-0479-4646-86c9-ca4a7ce31a28-builder-dockercfg-8h4xs-push\") pod \"prometheus-webhook-snmp-3-build\" (UID: \"c15c4956-0479-4646-86c9-ca4a7ce31a28\") " 
pod="service-telemetry/prometheus-webhook-snmp-3-build" Jan 26 17:23:38 crc kubenswrapper[4856]: I0126 17:23:38.371961 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c15c4956-0479-4646-86c9-ca4a7ce31a28-buildworkdir\") pod \"prometheus-webhook-snmp-3-build\" (UID: \"c15c4956-0479-4646-86c9-ca4a7ce31a28\") " pod="service-telemetry/prometheus-webhook-snmp-3-build" Jan 26 17:23:38 crc kubenswrapper[4856]: I0126 17:23:38.371985 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c15c4956-0479-4646-86c9-ca4a7ce31a28-build-blob-cache\") pod \"prometheus-webhook-snmp-3-build\" (UID: \"c15c4956-0479-4646-86c9-ca4a7ce31a28\") " pod="service-telemetry/prometheus-webhook-snmp-3-build" Jan 26 17:23:38 crc kubenswrapper[4856]: I0126 17:23:38.372004 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c15c4956-0479-4646-86c9-ca4a7ce31a28-buildcachedir\") pod \"prometheus-webhook-snmp-3-build\" (UID: \"c15c4956-0479-4646-86c9-ca4a7ce31a28\") " pod="service-telemetry/prometheus-webhook-snmp-3-build" Jan 26 17:23:38 crc kubenswrapper[4856]: I0126 17:23:38.372027 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c15c4956-0479-4646-86c9-ca4a7ce31a28-container-storage-run\") pod \"prometheus-webhook-snmp-3-build\" (UID: \"c15c4956-0479-4646-86c9-ca4a7ce31a28\") " pod="service-telemetry/prometheus-webhook-snmp-3-build" Jan 26 17:23:38 crc kubenswrapper[4856]: I0126 17:23:38.372042 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s28tl\" (UniqueName: \"kubernetes.io/projected/c15c4956-0479-4646-86c9-ca4a7ce31a28-kube-api-access-s28tl\") pod \"prometheus-webhook-snmp-3-build\" (UID: 
\"c15c4956-0479-4646-86c9-ca4a7ce31a28\") " pod="service-telemetry/prometheus-webhook-snmp-3-build" Jan 26 17:23:38 crc kubenswrapper[4856]: I0126 17:23:38.372064 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c15c4956-0479-4646-86c9-ca4a7ce31a28-node-pullsecrets\") pod \"prometheus-webhook-snmp-3-build\" (UID: \"c15c4956-0479-4646-86c9-ca4a7ce31a28\") " pod="service-telemetry/prometheus-webhook-snmp-3-build" Jan 26 17:23:38 crc kubenswrapper[4856]: I0126 17:23:38.372084 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c15c4956-0479-4646-86c9-ca4a7ce31a28-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-3-build\" (UID: \"c15c4956-0479-4646-86c9-ca4a7ce31a28\") " pod="service-telemetry/prometheus-webhook-snmp-3-build" Jan 26 17:23:38 crc kubenswrapper[4856]: I0126 17:23:38.372102 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c15c4956-0479-4646-86c9-ca4a7ce31a28-build-ca-bundles\") pod \"prometheus-webhook-snmp-3-build\" (UID: \"c15c4956-0479-4646-86c9-ca4a7ce31a28\") " pod="service-telemetry/prometheus-webhook-snmp-3-build" Jan 26 17:23:38 crc kubenswrapper[4856]: I0126 17:23:38.372371 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c15c4956-0479-4646-86c9-ca4a7ce31a28-container-storage-root\") pod \"prometheus-webhook-snmp-3-build\" (UID: \"c15c4956-0479-4646-86c9-ca4a7ce31a28\") " pod="service-telemetry/prometheus-webhook-snmp-3-build" Jan 26 17:23:38 crc kubenswrapper[4856]: I0126 17:23:38.372433 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c15c4956-0479-4646-86c9-ca4a7ce31a28-buildcachedir\") pod 
\"prometheus-webhook-snmp-3-build\" (UID: \"c15c4956-0479-4646-86c9-ca4a7ce31a28\") " pod="service-telemetry/prometheus-webhook-snmp-3-build" Jan 26 17:23:38 crc kubenswrapper[4856]: I0126 17:23:38.372830 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c15c4956-0479-4646-86c9-ca4a7ce31a28-build-ca-bundles\") pod \"prometheus-webhook-snmp-3-build\" (UID: \"c15c4956-0479-4646-86c9-ca4a7ce31a28\") " pod="service-telemetry/prometheus-webhook-snmp-3-build" Jan 26 17:23:38 crc kubenswrapper[4856]: I0126 17:23:38.372894 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c15c4956-0479-4646-86c9-ca4a7ce31a28-build-system-configs\") pod \"prometheus-webhook-snmp-3-build\" (UID: \"c15c4956-0479-4646-86c9-ca4a7ce31a28\") " pod="service-telemetry/prometheus-webhook-snmp-3-build" Jan 26 17:23:38 crc kubenswrapper[4856]: I0126 17:23:38.373008 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c15c4956-0479-4646-86c9-ca4a7ce31a28-container-storage-run\") pod \"prometheus-webhook-snmp-3-build\" (UID: \"c15c4956-0479-4646-86c9-ca4a7ce31a28\") " pod="service-telemetry/prometheus-webhook-snmp-3-build" Jan 26 17:23:38 crc kubenswrapper[4856]: I0126 17:23:38.373006 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c15c4956-0479-4646-86c9-ca4a7ce31a28-node-pullsecrets\") pod \"prometheus-webhook-snmp-3-build\" (UID: \"c15c4956-0479-4646-86c9-ca4a7ce31a28\") " pod="service-telemetry/prometheus-webhook-snmp-3-build" Jan 26 17:23:38 crc kubenswrapper[4856]: I0126 17:23:38.373186 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/c15c4956-0479-4646-86c9-ca4a7ce31a28-build-blob-cache\") pod \"prometheus-webhook-snmp-3-build\" (UID: \"c15c4956-0479-4646-86c9-ca4a7ce31a28\") " pod="service-telemetry/prometheus-webhook-snmp-3-build" Jan 26 17:23:38 crc kubenswrapper[4856]: I0126 17:23:38.373660 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c15c4956-0479-4646-86c9-ca4a7ce31a28-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-3-build\" (UID: \"c15c4956-0479-4646-86c9-ca4a7ce31a28\") " pod="service-telemetry/prometheus-webhook-snmp-3-build" Jan 26 17:23:38 crc kubenswrapper[4856]: I0126 17:23:38.374233 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c15c4956-0479-4646-86c9-ca4a7ce31a28-buildworkdir\") pod \"prometheus-webhook-snmp-3-build\" (UID: \"c15c4956-0479-4646-86c9-ca4a7ce31a28\") " pod="service-telemetry/prometheus-webhook-snmp-3-build" Jan 26 17:23:38 crc kubenswrapper[4856]: I0126 17:23:38.378107 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/c15c4956-0479-4646-86c9-ca4a7ce31a28-builder-dockercfg-8h4xs-push\") pod \"prometheus-webhook-snmp-3-build\" (UID: \"c15c4956-0479-4646-86c9-ca4a7ce31a28\") " pod="service-telemetry/prometheus-webhook-snmp-3-build" Jan 26 17:23:38 crc kubenswrapper[4856]: I0126 17:23:38.389809 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/c15c4956-0479-4646-86c9-ca4a7ce31a28-builder-dockercfg-8h4xs-pull\") pod \"prometheus-webhook-snmp-3-build\" (UID: \"c15c4956-0479-4646-86c9-ca4a7ce31a28\") " pod="service-telemetry/prometheus-webhook-snmp-3-build" Jan 26 17:23:38 crc kubenswrapper[4856]: I0126 17:23:38.400332 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-s28tl\" (UniqueName: \"kubernetes.io/projected/c15c4956-0479-4646-86c9-ca4a7ce31a28-kube-api-access-s28tl\") pod \"prometheus-webhook-snmp-3-build\" (UID: \"c15c4956-0479-4646-86c9-ca4a7ce31a28\") " pod="service-telemetry/prometheus-webhook-snmp-3-build" Jan 26 17:23:38 crc kubenswrapper[4856]: I0126 17:23:38.409163 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-3-build" Jan 26 17:23:38 crc kubenswrapper[4856]: I0126 17:23:38.644739 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-3-build"] Jan 26 17:23:39 crc kubenswrapper[4856]: I0126 17:23:39.497345 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-3-build" event={"ID":"c15c4956-0479-4646-86c9-ca4a7ce31a28","Type":"ContainerStarted","Data":"187f5c587ecc9aa2039c8f93aecae55132826926fc216d1bfc5a59d963e568e4"} Jan 26 17:23:39 crc kubenswrapper[4856]: I0126 17:23:39.497726 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-3-build" event={"ID":"c15c4956-0479-4646-86c9-ca4a7ce31a28","Type":"ContainerStarted","Data":"04562fdb1bb25c90ae0571bf68dcbe10145ddf06b7e4caf1d7fb642924f80caf"} Jan 26 17:23:40 crc kubenswrapper[4856]: I0126 17:23:40.505610 4856 generic.go:334] "Generic (PLEG): container finished" podID="c15c4956-0479-4646-86c9-ca4a7ce31a28" containerID="187f5c587ecc9aa2039c8f93aecae55132826926fc216d1bfc5a59d963e568e4" exitCode=0 Jan 26 17:23:40 crc kubenswrapper[4856]: I0126 17:23:40.505739 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-3-build" event={"ID":"c15c4956-0479-4646-86c9-ca4a7ce31a28","Type":"ContainerDied","Data":"187f5c587ecc9aa2039c8f93aecae55132826926fc216d1bfc5a59d963e568e4"} Jan 26 17:23:41 crc kubenswrapper[4856]: I0126 17:23:41.515013 4856 generic.go:334] "Generic (PLEG): container 
finished" podID="c15c4956-0479-4646-86c9-ca4a7ce31a28" containerID="4f96c5d3e3aa29c56225101e8cebf73a3825ff877c4c07e8e94cd75366ff9736" exitCode=0 Jan 26 17:23:41 crc kubenswrapper[4856]: I0126 17:23:41.515063 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-3-build" event={"ID":"c15c4956-0479-4646-86c9-ca4a7ce31a28","Type":"ContainerDied","Data":"4f96c5d3e3aa29c56225101e8cebf73a3825ff877c4c07e8e94cd75366ff9736"} Jan 26 17:23:41 crc kubenswrapper[4856]: I0126 17:23:41.564218 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-3-build_c15c4956-0479-4646-86c9-ca4a7ce31a28/manage-dockerfile/0.log" Jan 26 17:23:42 crc kubenswrapper[4856]: I0126 17:23:42.525313 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-3-build" event={"ID":"c15c4956-0479-4646-86c9-ca4a7ce31a28","Type":"ContainerStarted","Data":"3ecfe1e65b08426eb861eac05b29172d1abeeaf5b3a7ab21586bd8ce360d51d3"} Jan 26 17:23:42 crc kubenswrapper[4856]: I0126 17:23:42.621877 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-3-build" podStartSLOduration=5.621838745 podStartE2EDuration="5.621838745s" podCreationTimestamp="2026-01-26 17:23:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:23:42.588935834 +0000 UTC m=+1518.542189835" watchObservedRunningTime="2026-01-26 17:23:42.621838745 +0000 UTC m=+1518.575092726" Jan 26 17:23:58 crc kubenswrapper[4856]: I0126 17:23:58.407096 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hpsnz"] Jan 26 17:23:58 crc kubenswrapper[4856]: I0126 17:23:58.409431 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hpsnz" Jan 26 17:23:58 crc kubenswrapper[4856]: I0126 17:23:58.421633 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hpsnz"] Jan 26 17:23:58 crc kubenswrapper[4856]: I0126 17:23:58.476422 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07f01f78-5312-4b86-8c59-2c02f054f99d-utilities\") pod \"redhat-operators-hpsnz\" (UID: \"07f01f78-5312-4b86-8c59-2c02f054f99d\") " pod="openshift-marketplace/redhat-operators-hpsnz" Jan 26 17:23:58 crc kubenswrapper[4856]: I0126 17:23:58.476593 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07f01f78-5312-4b86-8c59-2c02f054f99d-catalog-content\") pod \"redhat-operators-hpsnz\" (UID: \"07f01f78-5312-4b86-8c59-2c02f054f99d\") " pod="openshift-marketplace/redhat-operators-hpsnz" Jan 26 17:23:58 crc kubenswrapper[4856]: I0126 17:23:58.476630 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlj8b\" (UniqueName: \"kubernetes.io/projected/07f01f78-5312-4b86-8c59-2c02f054f99d-kube-api-access-xlj8b\") pod \"redhat-operators-hpsnz\" (UID: \"07f01f78-5312-4b86-8c59-2c02f054f99d\") " pod="openshift-marketplace/redhat-operators-hpsnz" Jan 26 17:23:58 crc kubenswrapper[4856]: I0126 17:23:58.578155 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlj8b\" (UniqueName: \"kubernetes.io/projected/07f01f78-5312-4b86-8c59-2c02f054f99d-kube-api-access-xlj8b\") pod \"redhat-operators-hpsnz\" (UID: \"07f01f78-5312-4b86-8c59-2c02f054f99d\") " pod="openshift-marketplace/redhat-operators-hpsnz" Jan 26 17:23:58 crc kubenswrapper[4856]: I0126 17:23:58.578303 4856 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07f01f78-5312-4b86-8c59-2c02f054f99d-utilities\") pod \"redhat-operators-hpsnz\" (UID: \"07f01f78-5312-4b86-8c59-2c02f054f99d\") " pod="openshift-marketplace/redhat-operators-hpsnz" Jan 26 17:23:58 crc kubenswrapper[4856]: I0126 17:23:58.578376 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07f01f78-5312-4b86-8c59-2c02f054f99d-catalog-content\") pod \"redhat-operators-hpsnz\" (UID: \"07f01f78-5312-4b86-8c59-2c02f054f99d\") " pod="openshift-marketplace/redhat-operators-hpsnz" Jan 26 17:23:58 crc kubenswrapper[4856]: I0126 17:23:58.578932 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07f01f78-5312-4b86-8c59-2c02f054f99d-catalog-content\") pod \"redhat-operators-hpsnz\" (UID: \"07f01f78-5312-4b86-8c59-2c02f054f99d\") " pod="openshift-marketplace/redhat-operators-hpsnz" Jan 26 17:23:58 crc kubenswrapper[4856]: I0126 17:23:58.579079 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07f01f78-5312-4b86-8c59-2c02f054f99d-utilities\") pod \"redhat-operators-hpsnz\" (UID: \"07f01f78-5312-4b86-8c59-2c02f054f99d\") " pod="openshift-marketplace/redhat-operators-hpsnz" Jan 26 17:23:58 crc kubenswrapper[4856]: I0126 17:23:58.597902 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlj8b\" (UniqueName: \"kubernetes.io/projected/07f01f78-5312-4b86-8c59-2c02f054f99d-kube-api-access-xlj8b\") pod \"redhat-operators-hpsnz\" (UID: \"07f01f78-5312-4b86-8c59-2c02f054f99d\") " pod="openshift-marketplace/redhat-operators-hpsnz" Jan 26 17:23:58 crc kubenswrapper[4856]: I0126 17:23:58.730063 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hpsnz" Jan 26 17:23:59 crc kubenswrapper[4856]: I0126 17:23:59.131164 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hpsnz"] Jan 26 17:23:59 crc kubenswrapper[4856]: I0126 17:23:59.656023 4856 generic.go:334] "Generic (PLEG): container finished" podID="07f01f78-5312-4b86-8c59-2c02f054f99d" containerID="65ff28a459f4ead12ab9875ba11141cd0dd1d47926b020bafc7cd061527be0b4" exitCode=0 Jan 26 17:23:59 crc kubenswrapper[4856]: I0126 17:23:59.656069 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hpsnz" event={"ID":"07f01f78-5312-4b86-8c59-2c02f054f99d","Type":"ContainerDied","Data":"65ff28a459f4ead12ab9875ba11141cd0dd1d47926b020bafc7cd061527be0b4"} Jan 26 17:23:59 crc kubenswrapper[4856]: I0126 17:23:59.656097 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hpsnz" event={"ID":"07f01f78-5312-4b86-8c59-2c02f054f99d","Type":"ContainerStarted","Data":"2b7f525a65ab979c2adc1221856dc62b10204477ecabfc28c50af78293c15e81"} Jan 26 17:23:59 crc kubenswrapper[4856]: I0126 17:23:59.658577 4856 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 17:24:00 crc kubenswrapper[4856]: I0126 17:24:00.665547 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hpsnz" event={"ID":"07f01f78-5312-4b86-8c59-2c02f054f99d","Type":"ContainerStarted","Data":"4c9c83f945a435941423e915060ee89b3979fe0baafd5195d79d8cf074728770"} Jan 26 17:24:01 crc kubenswrapper[4856]: I0126 17:24:01.677055 4856 generic.go:334] "Generic (PLEG): container finished" podID="07f01f78-5312-4b86-8c59-2c02f054f99d" containerID="4c9c83f945a435941423e915060ee89b3979fe0baafd5195d79d8cf074728770" exitCode=0 Jan 26 17:24:01 crc kubenswrapper[4856]: I0126 17:24:01.677180 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-hpsnz" event={"ID":"07f01f78-5312-4b86-8c59-2c02f054f99d","Type":"ContainerDied","Data":"4c9c83f945a435941423e915060ee89b3979fe0baafd5195d79d8cf074728770"} Jan 26 17:24:02 crc kubenswrapper[4856]: I0126 17:24:02.686702 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hpsnz" event={"ID":"07f01f78-5312-4b86-8c59-2c02f054f99d","Type":"ContainerStarted","Data":"13ab3de3d22799488895b01d04fac36f5b9da2ddb5cd8acff648fd53826c28b3"} Jan 26 17:24:03 crc kubenswrapper[4856]: I0126 17:24:03.715227 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hpsnz" podStartSLOduration=3.152016927 podStartE2EDuration="5.715200982s" podCreationTimestamp="2026-01-26 17:23:58 +0000 UTC" firstStartedPulling="2026-01-26 17:23:59.658286876 +0000 UTC m=+1535.611540857" lastFinishedPulling="2026-01-26 17:24:02.221470931 +0000 UTC m=+1538.174724912" observedRunningTime="2026-01-26 17:24:03.713829533 +0000 UTC m=+1539.667083584" watchObservedRunningTime="2026-01-26 17:24:03.715200982 +0000 UTC m=+1539.668454973" Jan 26 17:24:08 crc kubenswrapper[4856]: I0126 17:24:08.731100 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hpsnz" Jan 26 17:24:08 crc kubenswrapper[4856]: I0126 17:24:08.731717 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hpsnz" Jan 26 17:24:08 crc kubenswrapper[4856]: I0126 17:24:08.776252 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hpsnz" Jan 26 17:24:09 crc kubenswrapper[4856]: I0126 17:24:09.777407 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hpsnz" Jan 26 17:24:09 crc kubenswrapper[4856]: I0126 17:24:09.823033 4856 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hpsnz"] Jan 26 17:24:11 crc kubenswrapper[4856]: I0126 17:24:11.745371 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hpsnz" podUID="07f01f78-5312-4b86-8c59-2c02f054f99d" containerName="registry-server" containerID="cri-o://13ab3de3d22799488895b01d04fac36f5b9da2ddb5cd8acff648fd53826c28b3" gracePeriod=2 Jan 26 17:24:15 crc kubenswrapper[4856]: I0126 17:24:15.659942 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hpsnz" Jan 26 17:24:15 crc kubenswrapper[4856]: I0126 17:24:15.726726 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07f01f78-5312-4b86-8c59-2c02f054f99d-utilities\") pod \"07f01f78-5312-4b86-8c59-2c02f054f99d\" (UID: \"07f01f78-5312-4b86-8c59-2c02f054f99d\") " Jan 26 17:24:15 crc kubenswrapper[4856]: I0126 17:24:15.726900 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlj8b\" (UniqueName: \"kubernetes.io/projected/07f01f78-5312-4b86-8c59-2c02f054f99d-kube-api-access-xlj8b\") pod \"07f01f78-5312-4b86-8c59-2c02f054f99d\" (UID: \"07f01f78-5312-4b86-8c59-2c02f054f99d\") " Jan 26 17:24:15 crc kubenswrapper[4856]: I0126 17:24:15.727003 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07f01f78-5312-4b86-8c59-2c02f054f99d-catalog-content\") pod \"07f01f78-5312-4b86-8c59-2c02f054f99d\" (UID: \"07f01f78-5312-4b86-8c59-2c02f054f99d\") " Jan 26 17:24:15 crc kubenswrapper[4856]: I0126 17:24:15.732067 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07f01f78-5312-4b86-8c59-2c02f054f99d-utilities" (OuterVolumeSpecName: "utilities") pod "07f01f78-5312-4b86-8c59-2c02f054f99d" 
(UID: "07f01f78-5312-4b86-8c59-2c02f054f99d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:24:15 crc kubenswrapper[4856]: I0126 17:24:15.737416 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07f01f78-5312-4b86-8c59-2c02f054f99d-kube-api-access-xlj8b" (OuterVolumeSpecName: "kube-api-access-xlj8b") pod "07f01f78-5312-4b86-8c59-2c02f054f99d" (UID: "07f01f78-5312-4b86-8c59-2c02f054f99d"). InnerVolumeSpecName "kube-api-access-xlj8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:24:15 crc kubenswrapper[4856]: I0126 17:24:15.786194 4856 generic.go:334] "Generic (PLEG): container finished" podID="07f01f78-5312-4b86-8c59-2c02f054f99d" containerID="13ab3de3d22799488895b01d04fac36f5b9da2ddb5cd8acff648fd53826c28b3" exitCode=0 Jan 26 17:24:15 crc kubenswrapper[4856]: I0126 17:24:15.786280 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hpsnz" event={"ID":"07f01f78-5312-4b86-8c59-2c02f054f99d","Type":"ContainerDied","Data":"13ab3de3d22799488895b01d04fac36f5b9da2ddb5cd8acff648fd53826c28b3"} Jan 26 17:24:15 crc kubenswrapper[4856]: I0126 17:24:15.786401 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hpsnz" event={"ID":"07f01f78-5312-4b86-8c59-2c02f054f99d","Type":"ContainerDied","Data":"2b7f525a65ab979c2adc1221856dc62b10204477ecabfc28c50af78293c15e81"} Jan 26 17:24:15 crc kubenswrapper[4856]: I0126 17:24:15.786384 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hpsnz" Jan 26 17:24:15 crc kubenswrapper[4856]: I0126 17:24:15.786433 4856 scope.go:117] "RemoveContainer" containerID="13ab3de3d22799488895b01d04fac36f5b9da2ddb5cd8acff648fd53826c28b3" Jan 26 17:24:15 crc kubenswrapper[4856]: I0126 17:24:15.814665 4856 scope.go:117] "RemoveContainer" containerID="4c9c83f945a435941423e915060ee89b3979fe0baafd5195d79d8cf074728770" Jan 26 17:24:15 crc kubenswrapper[4856]: I0126 17:24:15.829591 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07f01f78-5312-4b86-8c59-2c02f054f99d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:24:15 crc kubenswrapper[4856]: I0126 17:24:15.829695 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlj8b\" (UniqueName: \"kubernetes.io/projected/07f01f78-5312-4b86-8c59-2c02f054f99d-kube-api-access-xlj8b\") on node \"crc\" DevicePath \"\"" Jan 26 17:24:15 crc kubenswrapper[4856]: I0126 17:24:15.846137 4856 scope.go:117] "RemoveContainer" containerID="65ff28a459f4ead12ab9875ba11141cd0dd1d47926b020bafc7cd061527be0b4" Jan 26 17:24:15 crc kubenswrapper[4856]: I0126 17:24:15.883091 4856 scope.go:117] "RemoveContainer" containerID="13ab3de3d22799488895b01d04fac36f5b9da2ddb5cd8acff648fd53826c28b3" Jan 26 17:24:15 crc kubenswrapper[4856]: E0126 17:24:15.885023 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13ab3de3d22799488895b01d04fac36f5b9da2ddb5cd8acff648fd53826c28b3\": container with ID starting with 13ab3de3d22799488895b01d04fac36f5b9da2ddb5cd8acff648fd53826c28b3 not found: ID does not exist" containerID="13ab3de3d22799488895b01d04fac36f5b9da2ddb5cd8acff648fd53826c28b3" Jan 26 17:24:15 crc kubenswrapper[4856]: I0126 17:24:15.885089 4856 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"13ab3de3d22799488895b01d04fac36f5b9da2ddb5cd8acff648fd53826c28b3"} err="failed to get container status \"13ab3de3d22799488895b01d04fac36f5b9da2ddb5cd8acff648fd53826c28b3\": rpc error: code = NotFound desc = could not find container \"13ab3de3d22799488895b01d04fac36f5b9da2ddb5cd8acff648fd53826c28b3\": container with ID starting with 13ab3de3d22799488895b01d04fac36f5b9da2ddb5cd8acff648fd53826c28b3 not found: ID does not exist" Jan 26 17:24:15 crc kubenswrapper[4856]: I0126 17:24:15.885121 4856 scope.go:117] "RemoveContainer" containerID="4c9c83f945a435941423e915060ee89b3979fe0baafd5195d79d8cf074728770" Jan 26 17:24:15 crc kubenswrapper[4856]: E0126 17:24:15.887037 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c9c83f945a435941423e915060ee89b3979fe0baafd5195d79d8cf074728770\": container with ID starting with 4c9c83f945a435941423e915060ee89b3979fe0baafd5195d79d8cf074728770 not found: ID does not exist" containerID="4c9c83f945a435941423e915060ee89b3979fe0baafd5195d79d8cf074728770" Jan 26 17:24:15 crc kubenswrapper[4856]: I0126 17:24:15.887095 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c9c83f945a435941423e915060ee89b3979fe0baafd5195d79d8cf074728770"} err="failed to get container status \"4c9c83f945a435941423e915060ee89b3979fe0baafd5195d79d8cf074728770\": rpc error: code = NotFound desc = could not find container \"4c9c83f945a435941423e915060ee89b3979fe0baafd5195d79d8cf074728770\": container with ID starting with 4c9c83f945a435941423e915060ee89b3979fe0baafd5195d79d8cf074728770 not found: ID does not exist" Jan 26 17:24:15 crc kubenswrapper[4856]: I0126 17:24:15.887149 4856 scope.go:117] "RemoveContainer" containerID="65ff28a459f4ead12ab9875ba11141cd0dd1d47926b020bafc7cd061527be0b4" Jan 26 17:24:15 crc kubenswrapper[4856]: E0126 17:24:15.888935 4856 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"65ff28a459f4ead12ab9875ba11141cd0dd1d47926b020bafc7cd061527be0b4\": container with ID starting with 65ff28a459f4ead12ab9875ba11141cd0dd1d47926b020bafc7cd061527be0b4 not found: ID does not exist" containerID="65ff28a459f4ead12ab9875ba11141cd0dd1d47926b020bafc7cd061527be0b4" Jan 26 17:24:15 crc kubenswrapper[4856]: I0126 17:24:15.889030 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65ff28a459f4ead12ab9875ba11141cd0dd1d47926b020bafc7cd061527be0b4"} err="failed to get container status \"65ff28a459f4ead12ab9875ba11141cd0dd1d47926b020bafc7cd061527be0b4\": rpc error: code = NotFound desc = could not find container \"65ff28a459f4ead12ab9875ba11141cd0dd1d47926b020bafc7cd061527be0b4\": container with ID starting with 65ff28a459f4ead12ab9875ba11141cd0dd1d47926b020bafc7cd061527be0b4 not found: ID does not exist" Jan 26 17:24:15 crc kubenswrapper[4856]: I0126 17:24:15.901793 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07f01f78-5312-4b86-8c59-2c02f054f99d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "07f01f78-5312-4b86-8c59-2c02f054f99d" (UID: "07f01f78-5312-4b86-8c59-2c02f054f99d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:24:15 crc kubenswrapper[4856]: I0126 17:24:15.931990 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07f01f78-5312-4b86-8c59-2c02f054f99d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:24:16 crc kubenswrapper[4856]: I0126 17:24:16.127178 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hpsnz"] Jan 26 17:24:16 crc kubenswrapper[4856]: I0126 17:24:16.135669 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hpsnz"] Jan 26 17:24:16 crc kubenswrapper[4856]: I0126 17:24:16.797372 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-3-build_c15c4956-0479-4646-86c9-ca4a7ce31a28/docker-build/0.log" Jan 26 17:24:16 crc kubenswrapper[4856]: I0126 17:24:16.799666 4856 generic.go:334] "Generic (PLEG): container finished" podID="c15c4956-0479-4646-86c9-ca4a7ce31a28" containerID="3ecfe1e65b08426eb861eac05b29172d1abeeaf5b3a7ab21586bd8ce360d51d3" exitCode=1 Jan 26 17:24:16 crc kubenswrapper[4856]: I0126 17:24:16.799799 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-3-build" event={"ID":"c15c4956-0479-4646-86c9-ca4a7ce31a28","Type":"ContainerDied","Data":"3ecfe1e65b08426eb861eac05b29172d1abeeaf5b3a7ab21586bd8ce360d51d3"} Jan 26 17:24:17 crc kubenswrapper[4856]: I0126 17:24:17.406128 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07f01f78-5312-4b86-8c59-2c02f054f99d" path="/var/lib/kubelet/pods/07f01f78-5312-4b86-8c59-2c02f054f99d/volumes" Jan 26 17:24:18 crc kubenswrapper[4856]: I0126 17:24:18.115398 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-3-build_c15c4956-0479-4646-86c9-ca4a7ce31a28/docker-build/0.log" Jan 26 17:24:18 crc 
kubenswrapper[4856]: I0126 17:24:18.116939 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-3-build" Jan 26 17:24:18 crc kubenswrapper[4856]: I0126 17:24:18.168267 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c15c4956-0479-4646-86c9-ca4a7ce31a28-buildcachedir\") pod \"c15c4956-0479-4646-86c9-ca4a7ce31a28\" (UID: \"c15c4956-0479-4646-86c9-ca4a7ce31a28\") " Jan 26 17:24:18 crc kubenswrapper[4856]: I0126 17:24:18.168389 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c15c4956-0479-4646-86c9-ca4a7ce31a28-buildworkdir\") pod \"c15c4956-0479-4646-86c9-ca4a7ce31a28\" (UID: \"c15c4956-0479-4646-86c9-ca4a7ce31a28\") " Jan 26 17:24:18 crc kubenswrapper[4856]: I0126 17:24:18.168426 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c15c4956-0479-4646-86c9-ca4a7ce31a28-build-blob-cache\") pod \"c15c4956-0479-4646-86c9-ca4a7ce31a28\" (UID: \"c15c4956-0479-4646-86c9-ca4a7ce31a28\") " Jan 26 17:24:18 crc kubenswrapper[4856]: I0126 17:24:18.168454 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/c15c4956-0479-4646-86c9-ca4a7ce31a28-builder-dockercfg-8h4xs-push\") pod \"c15c4956-0479-4646-86c9-ca4a7ce31a28\" (UID: \"c15c4956-0479-4646-86c9-ca4a7ce31a28\") " Jan 26 17:24:18 crc kubenswrapper[4856]: I0126 17:24:18.168487 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c15c4956-0479-4646-86c9-ca4a7ce31a28-build-proxy-ca-bundles\") pod \"c15c4956-0479-4646-86c9-ca4a7ce31a28\" (UID: \"c15c4956-0479-4646-86c9-ca4a7ce31a28\") " Jan 26 
17:24:18 crc kubenswrapper[4856]: I0126 17:24:18.168503 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s28tl\" (UniqueName: \"kubernetes.io/projected/c15c4956-0479-4646-86c9-ca4a7ce31a28-kube-api-access-s28tl\") pod \"c15c4956-0479-4646-86c9-ca4a7ce31a28\" (UID: \"c15c4956-0479-4646-86c9-ca4a7ce31a28\") " Jan 26 17:24:18 crc kubenswrapper[4856]: I0126 17:24:18.168524 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c15c4956-0479-4646-86c9-ca4a7ce31a28-build-ca-bundles\") pod \"c15c4956-0479-4646-86c9-ca4a7ce31a28\" (UID: \"c15c4956-0479-4646-86c9-ca4a7ce31a28\") " Jan 26 17:24:18 crc kubenswrapper[4856]: I0126 17:24:18.168556 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/c15c4956-0479-4646-86c9-ca4a7ce31a28-builder-dockercfg-8h4xs-pull\") pod \"c15c4956-0479-4646-86c9-ca4a7ce31a28\" (UID: \"c15c4956-0479-4646-86c9-ca4a7ce31a28\") " Jan 26 17:24:18 crc kubenswrapper[4856]: I0126 17:24:18.168589 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c15c4956-0479-4646-86c9-ca4a7ce31a28-container-storage-root\") pod \"c15c4956-0479-4646-86c9-ca4a7ce31a28\" (UID: \"c15c4956-0479-4646-86c9-ca4a7ce31a28\") " Jan 26 17:24:18 crc kubenswrapper[4856]: I0126 17:24:18.168623 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c15c4956-0479-4646-86c9-ca4a7ce31a28-node-pullsecrets\") pod \"c15c4956-0479-4646-86c9-ca4a7ce31a28\" (UID: \"c15c4956-0479-4646-86c9-ca4a7ce31a28\") " Jan 26 17:24:18 crc kubenswrapper[4856]: I0126 17:24:18.168669 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" 
(UniqueName: \"kubernetes.io/empty-dir/c15c4956-0479-4646-86c9-ca4a7ce31a28-container-storage-run\") pod \"c15c4956-0479-4646-86c9-ca4a7ce31a28\" (UID: \"c15c4956-0479-4646-86c9-ca4a7ce31a28\") " Jan 26 17:24:18 crc kubenswrapper[4856]: I0126 17:24:18.168781 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c15c4956-0479-4646-86c9-ca4a7ce31a28-build-system-configs\") pod \"c15c4956-0479-4646-86c9-ca4a7ce31a28\" (UID: \"c15c4956-0479-4646-86c9-ca4a7ce31a28\") " Jan 26 17:24:18 crc kubenswrapper[4856]: I0126 17:24:18.168835 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c15c4956-0479-4646-86c9-ca4a7ce31a28-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "c15c4956-0479-4646-86c9-ca4a7ce31a28" (UID: "c15c4956-0479-4646-86c9-ca4a7ce31a28"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:24:18 crc kubenswrapper[4856]: I0126 17:24:18.169903 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c15c4956-0479-4646-86c9-ca4a7ce31a28-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "c15c4956-0479-4646-86c9-ca4a7ce31a28" (UID: "c15c4956-0479-4646-86c9-ca4a7ce31a28"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:24:18 crc kubenswrapper[4856]: I0126 17:24:18.170818 4856 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c15c4956-0479-4646-86c9-ca4a7ce31a28-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 26 17:24:18 crc kubenswrapper[4856]: I0126 17:24:18.170847 4856 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c15c4956-0479-4646-86c9-ca4a7ce31a28-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 26 17:24:18 crc kubenswrapper[4856]: I0126 17:24:18.171180 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c15c4956-0479-4646-86c9-ca4a7ce31a28-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "c15c4956-0479-4646-86c9-ca4a7ce31a28" (UID: "c15c4956-0479-4646-86c9-ca4a7ce31a28"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:24:18 crc kubenswrapper[4856]: I0126 17:24:18.171799 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c15c4956-0479-4646-86c9-ca4a7ce31a28-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "c15c4956-0479-4646-86c9-ca4a7ce31a28" (UID: "c15c4956-0479-4646-86c9-ca4a7ce31a28"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:24:18 crc kubenswrapper[4856]: I0126 17:24:18.171802 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c15c4956-0479-4646-86c9-ca4a7ce31a28-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "c15c4956-0479-4646-86c9-ca4a7ce31a28" (UID: "c15c4956-0479-4646-86c9-ca4a7ce31a28"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:24:18 crc kubenswrapper[4856]: I0126 17:24:18.171980 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c15c4956-0479-4646-86c9-ca4a7ce31a28-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "c15c4956-0479-4646-86c9-ca4a7ce31a28" (UID: "c15c4956-0479-4646-86c9-ca4a7ce31a28"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:24:18 crc kubenswrapper[4856]: I0126 17:24:18.176378 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c15c4956-0479-4646-86c9-ca4a7ce31a28-kube-api-access-s28tl" (OuterVolumeSpecName: "kube-api-access-s28tl") pod "c15c4956-0479-4646-86c9-ca4a7ce31a28" (UID: "c15c4956-0479-4646-86c9-ca4a7ce31a28"). InnerVolumeSpecName "kube-api-access-s28tl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:24:18 crc kubenswrapper[4856]: I0126 17:24:18.177113 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c15c4956-0479-4646-86c9-ca4a7ce31a28-builder-dockercfg-8h4xs-push" (OuterVolumeSpecName: "builder-dockercfg-8h4xs-push") pod "c15c4956-0479-4646-86c9-ca4a7ce31a28" (UID: "c15c4956-0479-4646-86c9-ca4a7ce31a28"). InnerVolumeSpecName "builder-dockercfg-8h4xs-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:24:18 crc kubenswrapper[4856]: I0126 17:24:18.177767 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c15c4956-0479-4646-86c9-ca4a7ce31a28-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "c15c4956-0479-4646-86c9-ca4a7ce31a28" (UID: "c15c4956-0479-4646-86c9-ca4a7ce31a28"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:24:18 crc kubenswrapper[4856]: I0126 17:24:18.178719 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c15c4956-0479-4646-86c9-ca4a7ce31a28-builder-dockercfg-8h4xs-pull" (OuterVolumeSpecName: "builder-dockercfg-8h4xs-pull") pod "c15c4956-0479-4646-86c9-ca4a7ce31a28" (UID: "c15c4956-0479-4646-86c9-ca4a7ce31a28"). InnerVolumeSpecName "builder-dockercfg-8h4xs-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:24:18 crc kubenswrapper[4856]: I0126 17:24:18.259727 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c15c4956-0479-4646-86c9-ca4a7ce31a28-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "c15c4956-0479-4646-86c9-ca4a7ce31a28" (UID: "c15c4956-0479-4646-86c9-ca4a7ce31a28"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:24:18 crc kubenswrapper[4856]: I0126 17:24:18.272256 4856 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c15c4956-0479-4646-86c9-ca4a7ce31a28-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 26 17:24:18 crc kubenswrapper[4856]: I0126 17:24:18.272292 4856 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c15c4956-0479-4646-86c9-ca4a7ce31a28-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 26 17:24:18 crc kubenswrapper[4856]: I0126 17:24:18.272302 4856 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c15c4956-0479-4646-86c9-ca4a7ce31a28-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 26 17:24:18 crc kubenswrapper[4856]: I0126 17:24:18.272312 4856 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/c15c4956-0479-4646-86c9-ca4a7ce31a28-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 26 17:24:18 crc kubenswrapper[4856]: I0126 17:24:18.272321 4856 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/c15c4956-0479-4646-86c9-ca4a7ce31a28-builder-dockercfg-8h4xs-push\") on node \"crc\" DevicePath \"\"" Jan 26 17:24:18 crc kubenswrapper[4856]: I0126 17:24:18.272330 4856 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c15c4956-0479-4646-86c9-ca4a7ce31a28-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 17:24:18 crc kubenswrapper[4856]: I0126 17:24:18.272338 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s28tl\" (UniqueName: \"kubernetes.io/projected/c15c4956-0479-4646-86c9-ca4a7ce31a28-kube-api-access-s28tl\") on node \"crc\" DevicePath \"\"" Jan 26 17:24:18 crc kubenswrapper[4856]: I0126 17:24:18.272347 4856 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/c15c4956-0479-4646-86c9-ca4a7ce31a28-builder-dockercfg-8h4xs-pull\") on node \"crc\" DevicePath \"\"" Jan 26 17:24:18 crc kubenswrapper[4856]: I0126 17:24:18.272356 4856 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c15c4956-0479-4646-86c9-ca4a7ce31a28-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 17:24:18 crc kubenswrapper[4856]: I0126 17:24:18.827673 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-3-build_c15c4956-0479-4646-86c9-ca4a7ce31a28/docker-build/0.log" Jan 26 17:24:18 crc kubenswrapper[4856]: I0126 17:24:18.830550 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-3-build" 
event={"ID":"c15c4956-0479-4646-86c9-ca4a7ce31a28","Type":"ContainerDied","Data":"04562fdb1bb25c90ae0571bf68dcbe10145ddf06b7e4caf1d7fb642924f80caf"} Jan 26 17:24:18 crc kubenswrapper[4856]: I0126 17:24:18.830615 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04562fdb1bb25c90ae0571bf68dcbe10145ddf06b7e4caf1d7fb642924f80caf" Jan 26 17:24:18 crc kubenswrapper[4856]: I0126 17:24:18.830751 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-3-build" Jan 26 17:24:18 crc kubenswrapper[4856]: I0126 17:24:18.853794 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c15c4956-0479-4646-86c9-ca4a7ce31a28-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "c15c4956-0479-4646-86c9-ca4a7ce31a28" (UID: "c15c4956-0479-4646-86c9-ca4a7ce31a28"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:24:18 crc kubenswrapper[4856]: I0126 17:24:18.890294 4856 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c15c4956-0479-4646-86c9-ca4a7ce31a28-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 26 17:24:28 crc kubenswrapper[4856]: I0126 17:24:28.143868 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8ttkg"] Jan 26 17:24:28 crc kubenswrapper[4856]: E0126 17:24:28.144837 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c15c4956-0479-4646-86c9-ca4a7ce31a28" containerName="docker-build" Jan 26 17:24:28 crc kubenswrapper[4856]: I0126 17:24:28.144857 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="c15c4956-0479-4646-86c9-ca4a7ce31a28" containerName="docker-build" Jan 26 17:24:28 crc kubenswrapper[4856]: E0126 17:24:28.144878 4856 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="07f01f78-5312-4b86-8c59-2c02f054f99d" containerName="registry-server" Jan 26 17:24:28 crc kubenswrapper[4856]: I0126 17:24:28.144888 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="07f01f78-5312-4b86-8c59-2c02f054f99d" containerName="registry-server" Jan 26 17:24:28 crc kubenswrapper[4856]: E0126 17:24:28.144906 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c15c4956-0479-4646-86c9-ca4a7ce31a28" containerName="manage-dockerfile" Jan 26 17:24:28 crc kubenswrapper[4856]: I0126 17:24:28.144914 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="c15c4956-0479-4646-86c9-ca4a7ce31a28" containerName="manage-dockerfile" Jan 26 17:24:28 crc kubenswrapper[4856]: E0126 17:24:28.144930 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07f01f78-5312-4b86-8c59-2c02f054f99d" containerName="extract-utilities" Jan 26 17:24:28 crc kubenswrapper[4856]: I0126 17:24:28.144937 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="07f01f78-5312-4b86-8c59-2c02f054f99d" containerName="extract-utilities" Jan 26 17:24:28 crc kubenswrapper[4856]: E0126 17:24:28.144950 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c15c4956-0479-4646-86c9-ca4a7ce31a28" containerName="git-clone" Jan 26 17:24:28 crc kubenswrapper[4856]: I0126 17:24:28.144960 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="c15c4956-0479-4646-86c9-ca4a7ce31a28" containerName="git-clone" Jan 26 17:24:28 crc kubenswrapper[4856]: E0126 17:24:28.144985 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07f01f78-5312-4b86-8c59-2c02f054f99d" containerName="extract-content" Jan 26 17:24:28 crc kubenswrapper[4856]: I0126 17:24:28.144995 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="07f01f78-5312-4b86-8c59-2c02f054f99d" containerName="extract-content" Jan 26 17:24:28 crc kubenswrapper[4856]: I0126 17:24:28.145127 4856 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="07f01f78-5312-4b86-8c59-2c02f054f99d" containerName="registry-server" Jan 26 17:24:28 crc kubenswrapper[4856]: I0126 17:24:28.145141 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="c15c4956-0479-4646-86c9-ca4a7ce31a28" containerName="docker-build" Jan 26 17:24:28 crc kubenswrapper[4856]: I0126 17:24:28.146220 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8ttkg" Jan 26 17:24:28 crc kubenswrapper[4856]: I0126 17:24:28.156298 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8ttkg"] Jan 26 17:24:28 crc kubenswrapper[4856]: I0126 17:24:28.236769 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bddzx\" (UniqueName: \"kubernetes.io/projected/5167c3a3-ec3c-4f30-9410-ebc60d61f515-kube-api-access-bddzx\") pod \"community-operators-8ttkg\" (UID: \"5167c3a3-ec3c-4f30-9410-ebc60d61f515\") " pod="openshift-marketplace/community-operators-8ttkg" Jan 26 17:24:28 crc kubenswrapper[4856]: I0126 17:24:28.236845 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5167c3a3-ec3c-4f30-9410-ebc60d61f515-catalog-content\") pod \"community-operators-8ttkg\" (UID: \"5167c3a3-ec3c-4f30-9410-ebc60d61f515\") " pod="openshift-marketplace/community-operators-8ttkg" Jan 26 17:24:28 crc kubenswrapper[4856]: I0126 17:24:28.236893 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5167c3a3-ec3c-4f30-9410-ebc60d61f515-utilities\") pod \"community-operators-8ttkg\" (UID: \"5167c3a3-ec3c-4f30-9410-ebc60d61f515\") " pod="openshift-marketplace/community-operators-8ttkg" Jan 26 17:24:28 crc kubenswrapper[4856]: I0126 17:24:28.338037 4856 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-bddzx\" (UniqueName: \"kubernetes.io/projected/5167c3a3-ec3c-4f30-9410-ebc60d61f515-kube-api-access-bddzx\") pod \"community-operators-8ttkg\" (UID: \"5167c3a3-ec3c-4f30-9410-ebc60d61f515\") " pod="openshift-marketplace/community-operators-8ttkg" Jan 26 17:24:28 crc kubenswrapper[4856]: I0126 17:24:28.338112 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5167c3a3-ec3c-4f30-9410-ebc60d61f515-catalog-content\") pod \"community-operators-8ttkg\" (UID: \"5167c3a3-ec3c-4f30-9410-ebc60d61f515\") " pod="openshift-marketplace/community-operators-8ttkg" Jan 26 17:24:28 crc kubenswrapper[4856]: I0126 17:24:28.338171 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5167c3a3-ec3c-4f30-9410-ebc60d61f515-utilities\") pod \"community-operators-8ttkg\" (UID: \"5167c3a3-ec3c-4f30-9410-ebc60d61f515\") " pod="openshift-marketplace/community-operators-8ttkg" Jan 26 17:24:28 crc kubenswrapper[4856]: I0126 17:24:28.338746 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5167c3a3-ec3c-4f30-9410-ebc60d61f515-catalog-content\") pod \"community-operators-8ttkg\" (UID: \"5167c3a3-ec3c-4f30-9410-ebc60d61f515\") " pod="openshift-marketplace/community-operators-8ttkg" Jan 26 17:24:28 crc kubenswrapper[4856]: I0126 17:24:28.338903 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5167c3a3-ec3c-4f30-9410-ebc60d61f515-utilities\") pod \"community-operators-8ttkg\" (UID: \"5167c3a3-ec3c-4f30-9410-ebc60d61f515\") " pod="openshift-marketplace/community-operators-8ttkg" Jan 26 17:24:28 crc kubenswrapper[4856]: I0126 17:24:28.367981 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-bddzx\" (UniqueName: \"kubernetes.io/projected/5167c3a3-ec3c-4f30-9410-ebc60d61f515-kube-api-access-bddzx\") pod \"community-operators-8ttkg\" (UID: \"5167c3a3-ec3c-4f30-9410-ebc60d61f515\") " pod="openshift-marketplace/community-operators-8ttkg" Jan 26 17:24:28 crc kubenswrapper[4856]: I0126 17:24:28.465327 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8ttkg" Jan 26 17:24:28 crc kubenswrapper[4856]: I0126 17:24:28.742087 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8ttkg"] Jan 26 17:24:28 crc kubenswrapper[4856]: I0126 17:24:28.855422 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-4-build"] Jan 26 17:24:28 crc kubenswrapper[4856]: I0126 17:24:28.872893 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-4-build" Jan 26 17:24:28 crc kubenswrapper[4856]: I0126 17:24:28.878480 4856 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-8h4xs" Jan 26 17:24:28 crc kubenswrapper[4856]: I0126 17:24:28.878733 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-4-sys-config" Jan 26 17:24:28 crc kubenswrapper[4856]: I0126 17:24:28.878913 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-4-ca" Jan 26 17:24:28 crc kubenswrapper[4856]: I0126 17:24:28.878969 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-4-global-ca" Jan 26 17:24:28 crc kubenswrapper[4856]: I0126 17:24:28.889266 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-4-build"] Jan 26 17:24:28 crc kubenswrapper[4856]: I0126 17:24:28.913355 4856 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8ttkg" event={"ID":"5167c3a3-ec3c-4f30-9410-ebc60d61f515","Type":"ContainerStarted","Data":"d32dd0fa9bc14484fc546366cd6dea290074e7531292759b6e03e195de0cc5ef"} Jan 26 17:24:28 crc kubenswrapper[4856]: I0126 17:24:28.945361 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9fd7ae61-20c0-41bb-93e6-f209748133ef-container-storage-run\") pod \"prometheus-webhook-snmp-4-build\" (UID: \"9fd7ae61-20c0-41bb-93e6-f209748133ef\") " pod="service-telemetry/prometheus-webhook-snmp-4-build" Jan 26 17:24:28 crc kubenswrapper[4856]: I0126 17:24:28.945432 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cq27\" (UniqueName: \"kubernetes.io/projected/9fd7ae61-20c0-41bb-93e6-f209748133ef-kube-api-access-8cq27\") pod \"prometheus-webhook-snmp-4-build\" (UID: \"9fd7ae61-20c0-41bb-93e6-f209748133ef\") " pod="service-telemetry/prometheus-webhook-snmp-4-build" Jan 26 17:24:28 crc kubenswrapper[4856]: I0126 17:24:28.945457 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9fd7ae61-20c0-41bb-93e6-f209748133ef-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-4-build\" (UID: \"9fd7ae61-20c0-41bb-93e6-f209748133ef\") " pod="service-telemetry/prometheus-webhook-snmp-4-build" Jan 26 17:24:28 crc kubenswrapper[4856]: I0126 17:24:28.945474 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9fd7ae61-20c0-41bb-93e6-f209748133ef-container-storage-root\") pod \"prometheus-webhook-snmp-4-build\" (UID: \"9fd7ae61-20c0-41bb-93e6-f209748133ef\") " 
pod="service-telemetry/prometheus-webhook-snmp-4-build" Jan 26 17:24:28 crc kubenswrapper[4856]: I0126 17:24:28.945492 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9fd7ae61-20c0-41bb-93e6-f209748133ef-buildworkdir\") pod \"prometheus-webhook-snmp-4-build\" (UID: \"9fd7ae61-20c0-41bb-93e6-f209748133ef\") " pod="service-telemetry/prometheus-webhook-snmp-4-build" Jan 26 17:24:28 crc kubenswrapper[4856]: I0126 17:24:28.945510 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9fd7ae61-20c0-41bb-93e6-f209748133ef-build-blob-cache\") pod \"prometheus-webhook-snmp-4-build\" (UID: \"9fd7ae61-20c0-41bb-93e6-f209748133ef\") " pod="service-telemetry/prometheus-webhook-snmp-4-build" Jan 26 17:24:28 crc kubenswrapper[4856]: I0126 17:24:28.945542 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9fd7ae61-20c0-41bb-93e6-f209748133ef-build-system-configs\") pod \"prometheus-webhook-snmp-4-build\" (UID: \"9fd7ae61-20c0-41bb-93e6-f209748133ef\") " pod="service-telemetry/prometheus-webhook-snmp-4-build" Jan 26 17:24:28 crc kubenswrapper[4856]: I0126 17:24:28.945569 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/9fd7ae61-20c0-41bb-93e6-f209748133ef-builder-dockercfg-8h4xs-pull\") pod \"prometheus-webhook-snmp-4-build\" (UID: \"9fd7ae61-20c0-41bb-93e6-f209748133ef\") " pod="service-telemetry/prometheus-webhook-snmp-4-build" Jan 26 17:24:28 crc kubenswrapper[4856]: I0126 17:24:28.945588 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/9fd7ae61-20c0-41bb-93e6-f209748133ef-build-ca-bundles\") pod \"prometheus-webhook-snmp-4-build\" (UID: \"9fd7ae61-20c0-41bb-93e6-f209748133ef\") " pod="service-telemetry/prometheus-webhook-snmp-4-build" Jan 26 17:24:28 crc kubenswrapper[4856]: I0126 17:24:28.945625 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9fd7ae61-20c0-41bb-93e6-f209748133ef-node-pullsecrets\") pod \"prometheus-webhook-snmp-4-build\" (UID: \"9fd7ae61-20c0-41bb-93e6-f209748133ef\") " pod="service-telemetry/prometheus-webhook-snmp-4-build" Jan 26 17:24:28 crc kubenswrapper[4856]: I0126 17:24:28.945645 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/9fd7ae61-20c0-41bb-93e6-f209748133ef-builder-dockercfg-8h4xs-push\") pod \"prometheus-webhook-snmp-4-build\" (UID: \"9fd7ae61-20c0-41bb-93e6-f209748133ef\") " pod="service-telemetry/prometheus-webhook-snmp-4-build" Jan 26 17:24:28 crc kubenswrapper[4856]: I0126 17:24:28.945660 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9fd7ae61-20c0-41bb-93e6-f209748133ef-buildcachedir\") pod \"prometheus-webhook-snmp-4-build\" (UID: \"9fd7ae61-20c0-41bb-93e6-f209748133ef\") " pod="service-telemetry/prometheus-webhook-snmp-4-build" Jan 26 17:24:29 crc kubenswrapper[4856]: I0126 17:24:29.046348 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9fd7ae61-20c0-41bb-93e6-f209748133ef-buildcachedir\") pod \"prometheus-webhook-snmp-4-build\" (UID: \"9fd7ae61-20c0-41bb-93e6-f209748133ef\") " pod="service-telemetry/prometheus-webhook-snmp-4-build" Jan 26 17:24:29 crc kubenswrapper[4856]: I0126 17:24:29.046722 4856 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/9fd7ae61-20c0-41bb-93e6-f209748133ef-builder-dockercfg-8h4xs-push\") pod \"prometheus-webhook-snmp-4-build\" (UID: \"9fd7ae61-20c0-41bb-93e6-f209748133ef\") " pod="service-telemetry/prometheus-webhook-snmp-4-build" Jan 26 17:24:29 crc kubenswrapper[4856]: I0126 17:24:29.046501 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9fd7ae61-20c0-41bb-93e6-f209748133ef-buildcachedir\") pod \"prometheus-webhook-snmp-4-build\" (UID: \"9fd7ae61-20c0-41bb-93e6-f209748133ef\") " pod="service-telemetry/prometheus-webhook-snmp-4-build" Jan 26 17:24:29 crc kubenswrapper[4856]: I0126 17:24:29.046755 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9fd7ae61-20c0-41bb-93e6-f209748133ef-container-storage-run\") pod \"prometheus-webhook-snmp-4-build\" (UID: \"9fd7ae61-20c0-41bb-93e6-f209748133ef\") " pod="service-telemetry/prometheus-webhook-snmp-4-build" Jan 26 17:24:29 crc kubenswrapper[4856]: I0126 17:24:29.046901 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cq27\" (UniqueName: \"kubernetes.io/projected/9fd7ae61-20c0-41bb-93e6-f209748133ef-kube-api-access-8cq27\") pod \"prometheus-webhook-snmp-4-build\" (UID: \"9fd7ae61-20c0-41bb-93e6-f209748133ef\") " pod="service-telemetry/prometheus-webhook-snmp-4-build" Jan 26 17:24:29 crc kubenswrapper[4856]: I0126 17:24:29.046950 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9fd7ae61-20c0-41bb-93e6-f209748133ef-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-4-build\" (UID: \"9fd7ae61-20c0-41bb-93e6-f209748133ef\") " 
pod="service-telemetry/prometheus-webhook-snmp-4-build" Jan 26 17:24:29 crc kubenswrapper[4856]: I0126 17:24:29.046972 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9fd7ae61-20c0-41bb-93e6-f209748133ef-container-storage-root\") pod \"prometheus-webhook-snmp-4-build\" (UID: \"9fd7ae61-20c0-41bb-93e6-f209748133ef\") " pod="service-telemetry/prometheus-webhook-snmp-4-build" Jan 26 17:24:29 crc kubenswrapper[4856]: I0126 17:24:29.046997 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9fd7ae61-20c0-41bb-93e6-f209748133ef-buildworkdir\") pod \"prometheus-webhook-snmp-4-build\" (UID: \"9fd7ae61-20c0-41bb-93e6-f209748133ef\") " pod="service-telemetry/prometheus-webhook-snmp-4-build" Jan 26 17:24:29 crc kubenswrapper[4856]: I0126 17:24:29.047030 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9fd7ae61-20c0-41bb-93e6-f209748133ef-build-blob-cache\") pod \"prometheus-webhook-snmp-4-build\" (UID: \"9fd7ae61-20c0-41bb-93e6-f209748133ef\") " pod="service-telemetry/prometheus-webhook-snmp-4-build" Jan 26 17:24:29 crc kubenswrapper[4856]: I0126 17:24:29.047058 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9fd7ae61-20c0-41bb-93e6-f209748133ef-build-system-configs\") pod \"prometheus-webhook-snmp-4-build\" (UID: \"9fd7ae61-20c0-41bb-93e6-f209748133ef\") " pod="service-telemetry/prometheus-webhook-snmp-4-build" Jan 26 17:24:29 crc kubenswrapper[4856]: I0126 17:24:29.047101 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/9fd7ae61-20c0-41bb-93e6-f209748133ef-builder-dockercfg-8h4xs-pull\") pod 
\"prometheus-webhook-snmp-4-build\" (UID: \"9fd7ae61-20c0-41bb-93e6-f209748133ef\") " pod="service-telemetry/prometheus-webhook-snmp-4-build" Jan 26 17:24:29 crc kubenswrapper[4856]: I0126 17:24:29.047120 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9fd7ae61-20c0-41bb-93e6-f209748133ef-container-storage-run\") pod \"prometheus-webhook-snmp-4-build\" (UID: \"9fd7ae61-20c0-41bb-93e6-f209748133ef\") " pod="service-telemetry/prometheus-webhook-snmp-4-build" Jan 26 17:24:29 crc kubenswrapper[4856]: I0126 17:24:29.047124 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9fd7ae61-20c0-41bb-93e6-f209748133ef-build-ca-bundles\") pod \"prometheus-webhook-snmp-4-build\" (UID: \"9fd7ae61-20c0-41bb-93e6-f209748133ef\") " pod="service-telemetry/prometheus-webhook-snmp-4-build" Jan 26 17:24:29 crc kubenswrapper[4856]: I0126 17:24:29.047188 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9fd7ae61-20c0-41bb-93e6-f209748133ef-node-pullsecrets\") pod \"prometheus-webhook-snmp-4-build\" (UID: \"9fd7ae61-20c0-41bb-93e6-f209748133ef\") " pod="service-telemetry/prometheus-webhook-snmp-4-build" Jan 26 17:24:29 crc kubenswrapper[4856]: I0126 17:24:29.047249 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9fd7ae61-20c0-41bb-93e6-f209748133ef-node-pullsecrets\") pod \"prometheus-webhook-snmp-4-build\" (UID: \"9fd7ae61-20c0-41bb-93e6-f209748133ef\") " pod="service-telemetry/prometheus-webhook-snmp-4-build" Jan 26 17:24:29 crc kubenswrapper[4856]: I0126 17:24:29.048000 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/9fd7ae61-20c0-41bb-93e6-f209748133ef-buildworkdir\") pod \"prometheus-webhook-snmp-4-build\" (UID: \"9fd7ae61-20c0-41bb-93e6-f209748133ef\") " pod="service-telemetry/prometheus-webhook-snmp-4-build" Jan 26 17:24:29 crc kubenswrapper[4856]: I0126 17:24:29.048244 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9fd7ae61-20c0-41bb-93e6-f209748133ef-build-system-configs\") pod \"prometheus-webhook-snmp-4-build\" (UID: \"9fd7ae61-20c0-41bb-93e6-f209748133ef\") " pod="service-telemetry/prometheus-webhook-snmp-4-build" Jan 26 17:24:29 crc kubenswrapper[4856]: I0126 17:24:29.048281 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9fd7ae61-20c0-41bb-93e6-f209748133ef-build-ca-bundles\") pod \"prometheus-webhook-snmp-4-build\" (UID: \"9fd7ae61-20c0-41bb-93e6-f209748133ef\") " pod="service-telemetry/prometheus-webhook-snmp-4-build" Jan 26 17:24:29 crc kubenswrapper[4856]: I0126 17:24:29.048383 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9fd7ae61-20c0-41bb-93e6-f209748133ef-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-4-build\" (UID: \"9fd7ae61-20c0-41bb-93e6-f209748133ef\") " pod="service-telemetry/prometheus-webhook-snmp-4-build" Jan 26 17:24:29 crc kubenswrapper[4856]: I0126 17:24:29.048453 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9fd7ae61-20c0-41bb-93e6-f209748133ef-build-blob-cache\") pod \"prometheus-webhook-snmp-4-build\" (UID: \"9fd7ae61-20c0-41bb-93e6-f209748133ef\") " pod="service-telemetry/prometheus-webhook-snmp-4-build" Jan 26 17:24:29 crc kubenswrapper[4856]: I0126 17:24:29.048555 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9fd7ae61-20c0-41bb-93e6-f209748133ef-container-storage-root\") pod \"prometheus-webhook-snmp-4-build\" (UID: \"9fd7ae61-20c0-41bb-93e6-f209748133ef\") " pod="service-telemetry/prometheus-webhook-snmp-4-build" Jan 26 17:24:29 crc kubenswrapper[4856]: I0126 17:24:29.061293 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/9fd7ae61-20c0-41bb-93e6-f209748133ef-builder-dockercfg-8h4xs-push\") pod \"prometheus-webhook-snmp-4-build\" (UID: \"9fd7ae61-20c0-41bb-93e6-f209748133ef\") " pod="service-telemetry/prometheus-webhook-snmp-4-build" Jan 26 17:24:29 crc kubenswrapper[4856]: I0126 17:24:29.062620 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/9fd7ae61-20c0-41bb-93e6-f209748133ef-builder-dockercfg-8h4xs-pull\") pod \"prometheus-webhook-snmp-4-build\" (UID: \"9fd7ae61-20c0-41bb-93e6-f209748133ef\") " pod="service-telemetry/prometheus-webhook-snmp-4-build" Jan 26 17:24:29 crc kubenswrapper[4856]: I0126 17:24:29.066770 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cq27\" (UniqueName: \"kubernetes.io/projected/9fd7ae61-20c0-41bb-93e6-f209748133ef-kube-api-access-8cq27\") pod \"prometheus-webhook-snmp-4-build\" (UID: \"9fd7ae61-20c0-41bb-93e6-f209748133ef\") " pod="service-telemetry/prometheus-webhook-snmp-4-build" Jan 26 17:24:29 crc kubenswrapper[4856]: I0126 17:24:29.191650 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-4-build" Jan 26 17:24:29 crc kubenswrapper[4856]: I0126 17:24:29.387757 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-4-build"] Jan 26 17:24:29 crc kubenswrapper[4856]: I0126 17:24:29.923413 4856 generic.go:334] "Generic (PLEG): container finished" podID="5167c3a3-ec3c-4f30-9410-ebc60d61f515" containerID="ea789d382e8ee7680a387d4c0ac031fe25c48fa1835d8928284e77f51f936f6a" exitCode=0 Jan 26 17:24:29 crc kubenswrapper[4856]: I0126 17:24:29.923490 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8ttkg" event={"ID":"5167c3a3-ec3c-4f30-9410-ebc60d61f515","Type":"ContainerDied","Data":"ea789d382e8ee7680a387d4c0ac031fe25c48fa1835d8928284e77f51f936f6a"} Jan 26 17:24:29 crc kubenswrapper[4856]: I0126 17:24:29.926405 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-4-build" event={"ID":"9fd7ae61-20c0-41bb-93e6-f209748133ef","Type":"ContainerStarted","Data":"63ec08bc14f1b2ac4e6ce7fff75e97dfd901f66d0eb16464110e649714bdfefb"} Jan 26 17:24:29 crc kubenswrapper[4856]: I0126 17:24:29.926494 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-4-build" event={"ID":"9fd7ae61-20c0-41bb-93e6-f209748133ef","Type":"ContainerStarted","Data":"1114ab5ebbed8464e1df1efeb4253f6b4ab7782e2e7ad6231ca36374142b6d8b"} Jan 26 17:24:30 crc kubenswrapper[4856]: E0126 17:24:30.051441 4856 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.241:46256->38.102.83.241:41827: write tcp 38.102.83.241:46256->38.102.83.241:41827: write: broken pipe Jan 26 17:24:30 crc kubenswrapper[4856]: I0126 17:24:30.935370 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8ttkg" 
event={"ID":"5167c3a3-ec3c-4f30-9410-ebc60d61f515","Type":"ContainerStarted","Data":"26c1821c4f23578ac6a0c1415d6730caff4fddaf6502ee746d86ad133657cf42"} Jan 26 17:24:30 crc kubenswrapper[4856]: I0126 17:24:30.938640 4856 generic.go:334] "Generic (PLEG): container finished" podID="9fd7ae61-20c0-41bb-93e6-f209748133ef" containerID="63ec08bc14f1b2ac4e6ce7fff75e97dfd901f66d0eb16464110e649714bdfefb" exitCode=0 Jan 26 17:24:30 crc kubenswrapper[4856]: I0126 17:24:30.938703 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-4-build" event={"ID":"9fd7ae61-20c0-41bb-93e6-f209748133ef","Type":"ContainerDied","Data":"63ec08bc14f1b2ac4e6ce7fff75e97dfd901f66d0eb16464110e649714bdfefb"} Jan 26 17:24:31 crc kubenswrapper[4856]: I0126 17:24:31.946373 4856 generic.go:334] "Generic (PLEG): container finished" podID="5167c3a3-ec3c-4f30-9410-ebc60d61f515" containerID="26c1821c4f23578ac6a0c1415d6730caff4fddaf6502ee746d86ad133657cf42" exitCode=0 Jan 26 17:24:31 crc kubenswrapper[4856]: I0126 17:24:31.946436 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8ttkg" event={"ID":"5167c3a3-ec3c-4f30-9410-ebc60d61f515","Type":"ContainerDied","Data":"26c1821c4f23578ac6a0c1415d6730caff4fddaf6502ee746d86ad133657cf42"} Jan 26 17:24:31 crc kubenswrapper[4856]: I0126 17:24:31.948629 4856 generic.go:334] "Generic (PLEG): container finished" podID="9fd7ae61-20c0-41bb-93e6-f209748133ef" containerID="0af3b5710b654d0464312b2ab2879d5bcc53b56905655509964916d5b57c7f23" exitCode=0 Jan 26 17:24:31 crc kubenswrapper[4856]: I0126 17:24:31.948670 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-4-build" event={"ID":"9fd7ae61-20c0-41bb-93e6-f209748133ef","Type":"ContainerDied","Data":"0af3b5710b654d0464312b2ab2879d5bcc53b56905655509964916d5b57c7f23"} Jan 26 17:24:32 crc kubenswrapper[4856]: I0126 17:24:32.005386 4856 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-4-build_9fd7ae61-20c0-41bb-93e6-f209748133ef/manage-dockerfile/0.log" Jan 26 17:24:32 crc kubenswrapper[4856]: I0126 17:24:32.956840 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-4-build" event={"ID":"9fd7ae61-20c0-41bb-93e6-f209748133ef","Type":"ContainerStarted","Data":"02dea6bff540ea6fdb0a80d31de9614f99f56cd175ee20e708f9fb1b894208a9"} Jan 26 17:24:32 crc kubenswrapper[4856]: I0126 17:24:32.988208 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-4-build" podStartSLOduration=4.988183756 podStartE2EDuration="4.988183756s" podCreationTimestamp="2026-01-26 17:24:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:24:32.982620857 +0000 UTC m=+1568.935874848" watchObservedRunningTime="2026-01-26 17:24:32.988183756 +0000 UTC m=+1568.941437737" Jan 26 17:24:33 crc kubenswrapper[4856]: I0126 17:24:33.966046 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8ttkg" event={"ID":"5167c3a3-ec3c-4f30-9410-ebc60d61f515","Type":"ContainerStarted","Data":"bc8203a7d47ee43722cf1047b5eda9c60a27c27e6ee323351ad4ef557fd2f359"} Jan 26 17:24:33 crc kubenswrapper[4856]: I0126 17:24:33.993128 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8ttkg" podStartSLOduration=2.202157824 podStartE2EDuration="5.993092323s" podCreationTimestamp="2026-01-26 17:24:28 +0000 UTC" firstStartedPulling="2026-01-26 17:24:29.925724982 +0000 UTC m=+1565.878978983" lastFinishedPulling="2026-01-26 17:24:33.716659501 +0000 UTC m=+1569.669913482" observedRunningTime="2026-01-26 17:24:33.986826944 +0000 UTC m=+1569.940080945" watchObservedRunningTime="2026-01-26 17:24:33.993092323 +0000 UTC m=+1569.946346354" 
Jan 26 17:24:38 crc kubenswrapper[4856]: I0126 17:24:38.466569 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8ttkg" Jan 26 17:24:38 crc kubenswrapper[4856]: I0126 17:24:38.467073 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8ttkg" Jan 26 17:24:38 crc kubenswrapper[4856]: I0126 17:24:38.524489 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8ttkg" Jan 26 17:24:39 crc kubenswrapper[4856]: I0126 17:24:39.038001 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8ttkg" Jan 26 17:24:39 crc kubenswrapper[4856]: I0126 17:24:39.096842 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8ttkg"] Jan 26 17:24:41 crc kubenswrapper[4856]: I0126 17:24:41.057984 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8ttkg" podUID="5167c3a3-ec3c-4f30-9410-ebc60d61f515" containerName="registry-server" containerID="cri-o://bc8203a7d47ee43722cf1047b5eda9c60a27c27e6ee323351ad4ef557fd2f359" gracePeriod=2 Jan 26 17:24:44 crc kubenswrapper[4856]: I0126 17:24:44.086723 4856 generic.go:334] "Generic (PLEG): container finished" podID="5167c3a3-ec3c-4f30-9410-ebc60d61f515" containerID="bc8203a7d47ee43722cf1047b5eda9c60a27c27e6ee323351ad4ef557fd2f359" exitCode=0 Jan 26 17:24:44 crc kubenswrapper[4856]: I0126 17:24:44.086784 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8ttkg" event={"ID":"5167c3a3-ec3c-4f30-9410-ebc60d61f515","Type":"ContainerDied","Data":"bc8203a7d47ee43722cf1047b5eda9c60a27c27e6ee323351ad4ef557fd2f359"} Jan 26 17:24:44 crc kubenswrapper[4856]: I0126 17:24:44.087207 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-8ttkg" event={"ID":"5167c3a3-ec3c-4f30-9410-ebc60d61f515","Type":"ContainerDied","Data":"d32dd0fa9bc14484fc546366cd6dea290074e7531292759b6e03e195de0cc5ef"} Jan 26 17:24:44 crc kubenswrapper[4856]: I0126 17:24:44.087229 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d32dd0fa9bc14484fc546366cd6dea290074e7531292759b6e03e195de0cc5ef" Jan 26 17:24:44 crc kubenswrapper[4856]: I0126 17:24:44.098656 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8ttkg" Jan 26 17:24:44 crc kubenswrapper[4856]: I0126 17:24:44.208873 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5167c3a3-ec3c-4f30-9410-ebc60d61f515-catalog-content\") pod \"5167c3a3-ec3c-4f30-9410-ebc60d61f515\" (UID: \"5167c3a3-ec3c-4f30-9410-ebc60d61f515\") " Jan 26 17:24:44 crc kubenswrapper[4856]: I0126 17:24:44.209107 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5167c3a3-ec3c-4f30-9410-ebc60d61f515-utilities\") pod \"5167c3a3-ec3c-4f30-9410-ebc60d61f515\" (UID: \"5167c3a3-ec3c-4f30-9410-ebc60d61f515\") " Jan 26 17:24:44 crc kubenswrapper[4856]: I0126 17:24:44.209197 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bddzx\" (UniqueName: \"kubernetes.io/projected/5167c3a3-ec3c-4f30-9410-ebc60d61f515-kube-api-access-bddzx\") pod \"5167c3a3-ec3c-4f30-9410-ebc60d61f515\" (UID: \"5167c3a3-ec3c-4f30-9410-ebc60d61f515\") " Jan 26 17:24:44 crc kubenswrapper[4856]: I0126 17:24:44.209928 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5167c3a3-ec3c-4f30-9410-ebc60d61f515-utilities" (OuterVolumeSpecName: "utilities") pod "5167c3a3-ec3c-4f30-9410-ebc60d61f515" (UID: 
"5167c3a3-ec3c-4f30-9410-ebc60d61f515"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:24:44 crc kubenswrapper[4856]: I0126 17:24:44.214998 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5167c3a3-ec3c-4f30-9410-ebc60d61f515-kube-api-access-bddzx" (OuterVolumeSpecName: "kube-api-access-bddzx") pod "5167c3a3-ec3c-4f30-9410-ebc60d61f515" (UID: "5167c3a3-ec3c-4f30-9410-ebc60d61f515"). InnerVolumeSpecName "kube-api-access-bddzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:24:44 crc kubenswrapper[4856]: I0126 17:24:44.282887 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5167c3a3-ec3c-4f30-9410-ebc60d61f515-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5167c3a3-ec3c-4f30-9410-ebc60d61f515" (UID: "5167c3a3-ec3c-4f30-9410-ebc60d61f515"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:24:44 crc kubenswrapper[4856]: I0126 17:24:44.311144 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5167c3a3-ec3c-4f30-9410-ebc60d61f515-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:24:44 crc kubenswrapper[4856]: I0126 17:24:44.311190 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bddzx\" (UniqueName: \"kubernetes.io/projected/5167c3a3-ec3c-4f30-9410-ebc60d61f515-kube-api-access-bddzx\") on node \"crc\" DevicePath \"\"" Jan 26 17:24:44 crc kubenswrapper[4856]: I0126 17:24:44.311205 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5167c3a3-ec3c-4f30-9410-ebc60d61f515-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:24:45 crc kubenswrapper[4856]: I0126 17:24:45.094298 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8ttkg" Jan 26 17:24:45 crc kubenswrapper[4856]: I0126 17:24:45.137909 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8ttkg"] Jan 26 17:24:45 crc kubenswrapper[4856]: I0126 17:24:45.142976 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8ttkg"] Jan 26 17:24:45 crc kubenswrapper[4856]: I0126 17:24:45.402902 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5167c3a3-ec3c-4f30-9410-ebc60d61f515" path="/var/lib/kubelet/pods/5167c3a3-ec3c-4f30-9410-ebc60d61f515/volumes" Jan 26 17:25:28 crc kubenswrapper[4856]: I0126 17:25:28.510993 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-4-build_9fd7ae61-20c0-41bb-93e6-f209748133ef/docker-build/0.log" Jan 26 17:25:28 crc kubenswrapper[4856]: I0126 17:25:28.512334 4856 generic.go:334] "Generic (PLEG): container finished" podID="9fd7ae61-20c0-41bb-93e6-f209748133ef" containerID="02dea6bff540ea6fdb0a80d31de9614f99f56cd175ee20e708f9fb1b894208a9" exitCode=1 Jan 26 17:25:28 crc kubenswrapper[4856]: I0126 17:25:28.512369 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-4-build" event={"ID":"9fd7ae61-20c0-41bb-93e6-f209748133ef","Type":"ContainerDied","Data":"02dea6bff540ea6fdb0a80d31de9614f99f56cd175ee20e708f9fb1b894208a9"} Jan 26 17:25:29 crc kubenswrapper[4856]: I0126 17:25:29.782450 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-4-build_9fd7ae61-20c0-41bb-93e6-f209748133ef/docker-build/0.log" Jan 26 17:25:29 crc kubenswrapper[4856]: I0126 17:25:29.785541 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-4-build" Jan 26 17:25:29 crc kubenswrapper[4856]: I0126 17:25:29.973339 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/9fd7ae61-20c0-41bb-93e6-f209748133ef-builder-dockercfg-8h4xs-push\") pod \"9fd7ae61-20c0-41bb-93e6-f209748133ef\" (UID: \"9fd7ae61-20c0-41bb-93e6-f209748133ef\") " Jan 26 17:25:29 crc kubenswrapper[4856]: I0126 17:25:29.973424 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9fd7ae61-20c0-41bb-93e6-f209748133ef-buildcachedir\") pod \"9fd7ae61-20c0-41bb-93e6-f209748133ef\" (UID: \"9fd7ae61-20c0-41bb-93e6-f209748133ef\") " Jan 26 17:25:29 crc kubenswrapper[4856]: I0126 17:25:29.973472 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9fd7ae61-20c0-41bb-93e6-f209748133ef-build-proxy-ca-bundles\") pod \"9fd7ae61-20c0-41bb-93e6-f209748133ef\" (UID: \"9fd7ae61-20c0-41bb-93e6-f209748133ef\") " Jan 26 17:25:29 crc kubenswrapper[4856]: I0126 17:25:29.973503 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/9fd7ae61-20c0-41bb-93e6-f209748133ef-builder-dockercfg-8h4xs-pull\") pod \"9fd7ae61-20c0-41bb-93e6-f209748133ef\" (UID: \"9fd7ae61-20c0-41bb-93e6-f209748133ef\") " Jan 26 17:25:29 crc kubenswrapper[4856]: I0126 17:25:29.973556 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9fd7ae61-20c0-41bb-93e6-f209748133ef-node-pullsecrets\") pod \"9fd7ae61-20c0-41bb-93e6-f209748133ef\" (UID: \"9fd7ae61-20c0-41bb-93e6-f209748133ef\") " Jan 26 17:25:29 crc kubenswrapper[4856]: I0126 17:25:29.973621 4856 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cq27\" (UniqueName: \"kubernetes.io/projected/9fd7ae61-20c0-41bb-93e6-f209748133ef-kube-api-access-8cq27\") pod \"9fd7ae61-20c0-41bb-93e6-f209748133ef\" (UID: \"9fd7ae61-20c0-41bb-93e6-f209748133ef\") " Jan 26 17:25:29 crc kubenswrapper[4856]: I0126 17:25:29.973675 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9fd7ae61-20c0-41bb-93e6-f209748133ef-build-blob-cache\") pod \"9fd7ae61-20c0-41bb-93e6-f209748133ef\" (UID: \"9fd7ae61-20c0-41bb-93e6-f209748133ef\") " Jan 26 17:25:29 crc kubenswrapper[4856]: I0126 17:25:29.973703 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9fd7ae61-20c0-41bb-93e6-f209748133ef-build-ca-bundles\") pod \"9fd7ae61-20c0-41bb-93e6-f209748133ef\" (UID: \"9fd7ae61-20c0-41bb-93e6-f209748133ef\") " Jan 26 17:25:29 crc kubenswrapper[4856]: I0126 17:25:29.973785 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9fd7ae61-20c0-41bb-93e6-f209748133ef-container-storage-root\") pod \"9fd7ae61-20c0-41bb-93e6-f209748133ef\" (UID: \"9fd7ae61-20c0-41bb-93e6-f209748133ef\") " Jan 26 17:25:29 crc kubenswrapper[4856]: I0126 17:25:29.973821 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9fd7ae61-20c0-41bb-93e6-f209748133ef-build-system-configs\") pod \"9fd7ae61-20c0-41bb-93e6-f209748133ef\" (UID: \"9fd7ae61-20c0-41bb-93e6-f209748133ef\") " Jan 26 17:25:29 crc kubenswrapper[4856]: I0126 17:25:29.974066 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fd7ae61-20c0-41bb-93e6-f209748133ef-node-pullsecrets" (OuterVolumeSpecName: 
"node-pullsecrets") pod "9fd7ae61-20c0-41bb-93e6-f209748133ef" (UID: "9fd7ae61-20c0-41bb-93e6-f209748133ef"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:25:29 crc kubenswrapper[4856]: I0126 17:25:29.974614 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fd7ae61-20c0-41bb-93e6-f209748133ef-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "9fd7ae61-20c0-41bb-93e6-f209748133ef" (UID: "9fd7ae61-20c0-41bb-93e6-f209748133ef"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:25:29 crc kubenswrapper[4856]: I0126 17:25:29.974953 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fd7ae61-20c0-41bb-93e6-f209748133ef-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "9fd7ae61-20c0-41bb-93e6-f209748133ef" (UID: "9fd7ae61-20c0-41bb-93e6-f209748133ef"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:25:29 crc kubenswrapper[4856]: I0126 17:25:29.975029 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9fd7ae61-20c0-41bb-93e6-f209748133ef-container-storage-run\") pod \"9fd7ae61-20c0-41bb-93e6-f209748133ef\" (UID: \"9fd7ae61-20c0-41bb-93e6-f209748133ef\") " Jan 26 17:25:29 crc kubenswrapper[4856]: I0126 17:25:29.975459 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9fd7ae61-20c0-41bb-93e6-f209748133ef-buildworkdir\") pod \"9fd7ae61-20c0-41bb-93e6-f209748133ef\" (UID: \"9fd7ae61-20c0-41bb-93e6-f209748133ef\") " Jan 26 17:25:29 crc kubenswrapper[4856]: I0126 17:25:29.975033 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fd7ae61-20c0-41bb-93e6-f209748133ef-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "9fd7ae61-20c0-41bb-93e6-f209748133ef" (UID: "9fd7ae61-20c0-41bb-93e6-f209748133ef"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:25:29 crc kubenswrapper[4856]: I0126 17:25:29.975361 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fd7ae61-20c0-41bb-93e6-f209748133ef-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "9fd7ae61-20c0-41bb-93e6-f209748133ef" (UID: "9fd7ae61-20c0-41bb-93e6-f209748133ef"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:25:29 crc kubenswrapper[4856]: I0126 17:25:29.975501 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fd7ae61-20c0-41bb-93e6-f209748133ef-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "9fd7ae61-20c0-41bb-93e6-f209748133ef" (UID: "9fd7ae61-20c0-41bb-93e6-f209748133ef"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:25:29 crc kubenswrapper[4856]: I0126 17:25:29.975990 4856 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9fd7ae61-20c0-41bb-93e6-f209748133ef-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 17:25:29 crc kubenswrapper[4856]: I0126 17:25:29.976033 4856 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9fd7ae61-20c0-41bb-93e6-f209748133ef-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 26 17:25:29 crc kubenswrapper[4856]: I0126 17:25:29.976054 4856 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9fd7ae61-20c0-41bb-93e6-f209748133ef-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 26 17:25:29 crc kubenswrapper[4856]: I0126 17:25:29.976072 4856 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9fd7ae61-20c0-41bb-93e6-f209748133ef-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 26 17:25:29 crc kubenswrapper[4856]: I0126 17:25:29.976088 4856 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9fd7ae61-20c0-41bb-93e6-f209748133ef-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 17:25:29 crc kubenswrapper[4856]: I0126 17:25:29.976105 4856 reconciler_common.go:293] 
"Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9fd7ae61-20c0-41bb-93e6-f209748133ef-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 26 17:25:29 crc kubenswrapper[4856]: I0126 17:25:29.977926 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fd7ae61-20c0-41bb-93e6-f209748133ef-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "9fd7ae61-20c0-41bb-93e6-f209748133ef" (UID: "9fd7ae61-20c0-41bb-93e6-f209748133ef"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:25:29 crc kubenswrapper[4856]: I0126 17:25:29.980418 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fd7ae61-20c0-41bb-93e6-f209748133ef-builder-dockercfg-8h4xs-push" (OuterVolumeSpecName: "builder-dockercfg-8h4xs-push") pod "9fd7ae61-20c0-41bb-93e6-f209748133ef" (UID: "9fd7ae61-20c0-41bb-93e6-f209748133ef"). InnerVolumeSpecName "builder-dockercfg-8h4xs-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:25:29 crc kubenswrapper[4856]: I0126 17:25:29.980543 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fd7ae61-20c0-41bb-93e6-f209748133ef-kube-api-access-8cq27" (OuterVolumeSpecName: "kube-api-access-8cq27") pod "9fd7ae61-20c0-41bb-93e6-f209748133ef" (UID: "9fd7ae61-20c0-41bb-93e6-f209748133ef"). InnerVolumeSpecName "kube-api-access-8cq27". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:25:29 crc kubenswrapper[4856]: I0126 17:25:29.981039 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fd7ae61-20c0-41bb-93e6-f209748133ef-builder-dockercfg-8h4xs-pull" (OuterVolumeSpecName: "builder-dockercfg-8h4xs-pull") pod "9fd7ae61-20c0-41bb-93e6-f209748133ef" (UID: "9fd7ae61-20c0-41bb-93e6-f209748133ef"). InnerVolumeSpecName "builder-dockercfg-8h4xs-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:25:30 crc kubenswrapper[4856]: I0126 17:25:30.060554 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fd7ae61-20c0-41bb-93e6-f209748133ef-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "9fd7ae61-20c0-41bb-93e6-f209748133ef" (UID: "9fd7ae61-20c0-41bb-93e6-f209748133ef"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:25:30 crc kubenswrapper[4856]: I0126 17:25:30.076993 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cq27\" (UniqueName: \"kubernetes.io/projected/9fd7ae61-20c0-41bb-93e6-f209748133ef-kube-api-access-8cq27\") on node \"crc\" DevicePath \"\"" Jan 26 17:25:30 crc kubenswrapper[4856]: I0126 17:25:30.077028 4856 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9fd7ae61-20c0-41bb-93e6-f209748133ef-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 26 17:25:30 crc kubenswrapper[4856]: I0126 17:25:30.077041 4856 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9fd7ae61-20c0-41bb-93e6-f209748133ef-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 26 17:25:30 crc kubenswrapper[4856]: I0126 17:25:30.077053 4856 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/9fd7ae61-20c0-41bb-93e6-f209748133ef-builder-dockercfg-8h4xs-push\") on node \"crc\" DevicePath \"\"" Jan 26 17:25:30 crc kubenswrapper[4856]: I0126 17:25:30.077066 4856 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/9fd7ae61-20c0-41bb-93e6-f209748133ef-builder-dockercfg-8h4xs-pull\") on node \"crc\" DevicePath \"\"" Jan 26 17:25:30 crc kubenswrapper[4856]: I0126 17:25:30.530510 4856 log.go:25] "Finished 
parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-4-build_9fd7ae61-20c0-41bb-93e6-f209748133ef/docker-build/0.log" Jan 26 17:25:30 crc kubenswrapper[4856]: I0126 17:25:30.531861 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-4-build" event={"ID":"9fd7ae61-20c0-41bb-93e6-f209748133ef","Type":"ContainerDied","Data":"1114ab5ebbed8464e1df1efeb4253f6b4ab7782e2e7ad6231ca36374142b6d8b"} Jan 26 17:25:30 crc kubenswrapper[4856]: I0126 17:25:30.531908 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1114ab5ebbed8464e1df1efeb4253f6b4ab7782e2e7ad6231ca36374142b6d8b" Jan 26 17:25:30 crc kubenswrapper[4856]: I0126 17:25:30.531948 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-4-build" Jan 26 17:25:30 crc kubenswrapper[4856]: I0126 17:25:30.608249 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fd7ae61-20c0-41bb-93e6-f209748133ef-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "9fd7ae61-20c0-41bb-93e6-f209748133ef" (UID: "9fd7ae61-20c0-41bb-93e6-f209748133ef"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:25:30 crc kubenswrapper[4856]: I0126 17:25:30.687613 4856 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9fd7ae61-20c0-41bb-93e6-f209748133ef-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 26 17:25:40 crc kubenswrapper[4856]: I0126 17:25:40.145750 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-5-build"] Jan 26 17:25:40 crc kubenswrapper[4856]: E0126 17:25:40.146589 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fd7ae61-20c0-41bb-93e6-f209748133ef" containerName="manage-dockerfile" Jan 26 17:25:40 crc kubenswrapper[4856]: I0126 17:25:40.146606 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fd7ae61-20c0-41bb-93e6-f209748133ef" containerName="manage-dockerfile" Jan 26 17:25:40 crc kubenswrapper[4856]: E0126 17:25:40.146620 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fd7ae61-20c0-41bb-93e6-f209748133ef" containerName="git-clone" Jan 26 17:25:40 crc kubenswrapper[4856]: I0126 17:25:40.146626 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fd7ae61-20c0-41bb-93e6-f209748133ef" containerName="git-clone" Jan 26 17:25:40 crc kubenswrapper[4856]: E0126 17:25:40.146635 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5167c3a3-ec3c-4f30-9410-ebc60d61f515" containerName="extract-content" Jan 26 17:25:40 crc kubenswrapper[4856]: I0126 17:25:40.146641 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="5167c3a3-ec3c-4f30-9410-ebc60d61f515" containerName="extract-content" Jan 26 17:25:40 crc kubenswrapper[4856]: E0126 17:25:40.146651 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5167c3a3-ec3c-4f30-9410-ebc60d61f515" containerName="registry-server" Jan 26 17:25:40 crc kubenswrapper[4856]: I0126 17:25:40.146658 4856 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="5167c3a3-ec3c-4f30-9410-ebc60d61f515" containerName="registry-server" Jan 26 17:25:40 crc kubenswrapper[4856]: E0126 17:25:40.146668 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5167c3a3-ec3c-4f30-9410-ebc60d61f515" containerName="extract-utilities" Jan 26 17:25:40 crc kubenswrapper[4856]: I0126 17:25:40.146673 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="5167c3a3-ec3c-4f30-9410-ebc60d61f515" containerName="extract-utilities" Jan 26 17:25:40 crc kubenswrapper[4856]: E0126 17:25:40.146682 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fd7ae61-20c0-41bb-93e6-f209748133ef" containerName="docker-build" Jan 26 17:25:40 crc kubenswrapper[4856]: I0126 17:25:40.146689 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fd7ae61-20c0-41bb-93e6-f209748133ef" containerName="docker-build" Jan 26 17:25:40 crc kubenswrapper[4856]: I0126 17:25:40.146877 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="5167c3a3-ec3c-4f30-9410-ebc60d61f515" containerName="registry-server" Jan 26 17:25:40 crc kubenswrapper[4856]: I0126 17:25:40.146894 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fd7ae61-20c0-41bb-93e6-f209748133ef" containerName="docker-build" Jan 26 17:25:40 crc kubenswrapper[4856]: I0126 17:25:40.148023 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-5-build" Jan 26 17:25:40 crc kubenswrapper[4856]: I0126 17:25:40.150129 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-5-global-ca" Jan 26 17:25:40 crc kubenswrapper[4856]: I0126 17:25:40.150163 4856 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-8h4xs" Jan 26 17:25:40 crc kubenswrapper[4856]: I0126 17:25:40.150129 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-5-ca" Jan 26 17:25:40 crc kubenswrapper[4856]: I0126 17:25:40.150467 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-5-sys-config" Jan 26 17:25:40 crc kubenswrapper[4856]: I0126 17:25:40.164768 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-5-build"] Jan 26 17:25:40 crc kubenswrapper[4856]: I0126 17:25:40.377321 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/77516bf5-33dc-4722-85d1-8cd196e055b3-buildworkdir\") pod \"prometheus-webhook-snmp-5-build\" (UID: \"77516bf5-33dc-4722-85d1-8cd196e055b3\") " pod="service-telemetry/prometheus-webhook-snmp-5-build" Jan 26 17:25:40 crc kubenswrapper[4856]: I0126 17:25:40.377474 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/77516bf5-33dc-4722-85d1-8cd196e055b3-node-pullsecrets\") pod \"prometheus-webhook-snmp-5-build\" (UID: \"77516bf5-33dc-4722-85d1-8cd196e055b3\") " pod="service-telemetry/prometheus-webhook-snmp-5-build" Jan 26 17:25:40 crc kubenswrapper[4856]: I0126 17:25:40.377558 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/77516bf5-33dc-4722-85d1-8cd196e055b3-builder-dockercfg-8h4xs-pull\") pod \"prometheus-webhook-snmp-5-build\" (UID: \"77516bf5-33dc-4722-85d1-8cd196e055b3\") " pod="service-telemetry/prometheus-webhook-snmp-5-build" Jan 26 17:25:40 crc kubenswrapper[4856]: I0126 17:25:40.377603 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/77516bf5-33dc-4722-85d1-8cd196e055b3-build-system-configs\") pod \"prometheus-webhook-snmp-5-build\" (UID: \"77516bf5-33dc-4722-85d1-8cd196e055b3\") " pod="service-telemetry/prometheus-webhook-snmp-5-build" Jan 26 17:25:40 crc kubenswrapper[4856]: I0126 17:25:40.377842 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/77516bf5-33dc-4722-85d1-8cd196e055b3-container-storage-run\") pod \"prometheus-webhook-snmp-5-build\" (UID: \"77516bf5-33dc-4722-85d1-8cd196e055b3\") " pod="service-telemetry/prometheus-webhook-snmp-5-build" Jan 26 17:25:40 crc kubenswrapper[4856]: I0126 17:25:40.377897 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/77516bf5-33dc-4722-85d1-8cd196e055b3-build-blob-cache\") pod \"prometheus-webhook-snmp-5-build\" (UID: \"77516bf5-33dc-4722-85d1-8cd196e055b3\") " pod="service-telemetry/prometheus-webhook-snmp-5-build" Jan 26 17:25:40 crc kubenswrapper[4856]: I0126 17:25:40.377935 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb88j\" (UniqueName: \"kubernetes.io/projected/77516bf5-33dc-4722-85d1-8cd196e055b3-kube-api-access-vb88j\") pod \"prometheus-webhook-snmp-5-build\" (UID: \"77516bf5-33dc-4722-85d1-8cd196e055b3\") " 
pod="service-telemetry/prometheus-webhook-snmp-5-build" Jan 26 17:25:40 crc kubenswrapper[4856]: I0126 17:25:40.377983 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77516bf5-33dc-4722-85d1-8cd196e055b3-build-ca-bundles\") pod \"prometheus-webhook-snmp-5-build\" (UID: \"77516bf5-33dc-4722-85d1-8cd196e055b3\") " pod="service-telemetry/prometheus-webhook-snmp-5-build" Jan 26 17:25:40 crc kubenswrapper[4856]: I0126 17:25:40.378068 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77516bf5-33dc-4722-85d1-8cd196e055b3-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-5-build\" (UID: \"77516bf5-33dc-4722-85d1-8cd196e055b3\") " pod="service-telemetry/prometheus-webhook-snmp-5-build" Jan 26 17:25:40 crc kubenswrapper[4856]: I0126 17:25:40.378084 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/77516bf5-33dc-4722-85d1-8cd196e055b3-buildcachedir\") pod \"prometheus-webhook-snmp-5-build\" (UID: \"77516bf5-33dc-4722-85d1-8cd196e055b3\") " pod="service-telemetry/prometheus-webhook-snmp-5-build" Jan 26 17:25:40 crc kubenswrapper[4856]: I0126 17:25:40.378103 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/77516bf5-33dc-4722-85d1-8cd196e055b3-builder-dockercfg-8h4xs-push\") pod \"prometheus-webhook-snmp-5-build\" (UID: \"77516bf5-33dc-4722-85d1-8cd196e055b3\") " pod="service-telemetry/prometheus-webhook-snmp-5-build" Jan 26 17:25:40 crc kubenswrapper[4856]: I0126 17:25:40.378127 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/77516bf5-33dc-4722-85d1-8cd196e055b3-container-storage-root\") pod \"prometheus-webhook-snmp-5-build\" (UID: \"77516bf5-33dc-4722-85d1-8cd196e055b3\") " pod="service-telemetry/prometheus-webhook-snmp-5-build" Jan 26 17:25:40 crc kubenswrapper[4856]: I0126 17:25:40.479723 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/77516bf5-33dc-4722-85d1-8cd196e055b3-build-blob-cache\") pod \"prometheus-webhook-snmp-5-build\" (UID: \"77516bf5-33dc-4722-85d1-8cd196e055b3\") " pod="service-telemetry/prometheus-webhook-snmp-5-build" Jan 26 17:25:40 crc kubenswrapper[4856]: I0126 17:25:40.479794 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb88j\" (UniqueName: \"kubernetes.io/projected/77516bf5-33dc-4722-85d1-8cd196e055b3-kube-api-access-vb88j\") pod \"prometheus-webhook-snmp-5-build\" (UID: \"77516bf5-33dc-4722-85d1-8cd196e055b3\") " pod="service-telemetry/prometheus-webhook-snmp-5-build" Jan 26 17:25:40 crc kubenswrapper[4856]: I0126 17:25:40.479850 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77516bf5-33dc-4722-85d1-8cd196e055b3-build-ca-bundles\") pod \"prometheus-webhook-snmp-5-build\" (UID: \"77516bf5-33dc-4722-85d1-8cd196e055b3\") " pod="service-telemetry/prometheus-webhook-snmp-5-build" Jan 26 17:25:40 crc kubenswrapper[4856]: I0126 17:25:40.479983 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77516bf5-33dc-4722-85d1-8cd196e055b3-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-5-build\" (UID: \"77516bf5-33dc-4722-85d1-8cd196e055b3\") " pod="service-telemetry/prometheus-webhook-snmp-5-build" Jan 26 17:25:40 crc kubenswrapper[4856]: I0126 17:25:40.480037 4856 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/77516bf5-33dc-4722-85d1-8cd196e055b3-buildcachedir\") pod \"prometheus-webhook-snmp-5-build\" (UID: \"77516bf5-33dc-4722-85d1-8cd196e055b3\") " pod="service-telemetry/prometheus-webhook-snmp-5-build" Jan 26 17:25:40 crc kubenswrapper[4856]: I0126 17:25:40.480074 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/77516bf5-33dc-4722-85d1-8cd196e055b3-builder-dockercfg-8h4xs-push\") pod \"prometheus-webhook-snmp-5-build\" (UID: \"77516bf5-33dc-4722-85d1-8cd196e055b3\") " pod="service-telemetry/prometheus-webhook-snmp-5-build" Jan 26 17:25:40 crc kubenswrapper[4856]: I0126 17:25:40.480097 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/77516bf5-33dc-4722-85d1-8cd196e055b3-container-storage-root\") pod \"prometheus-webhook-snmp-5-build\" (UID: \"77516bf5-33dc-4722-85d1-8cd196e055b3\") " pod="service-telemetry/prometheus-webhook-snmp-5-build" Jan 26 17:25:40 crc kubenswrapper[4856]: I0126 17:25:40.480124 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/77516bf5-33dc-4722-85d1-8cd196e055b3-buildworkdir\") pod \"prometheus-webhook-snmp-5-build\" (UID: \"77516bf5-33dc-4722-85d1-8cd196e055b3\") " pod="service-telemetry/prometheus-webhook-snmp-5-build" Jan 26 17:25:40 crc kubenswrapper[4856]: I0126 17:25:40.480183 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/77516bf5-33dc-4722-85d1-8cd196e055b3-node-pullsecrets\") pod \"prometheus-webhook-snmp-5-build\" (UID: \"77516bf5-33dc-4722-85d1-8cd196e055b3\") " pod="service-telemetry/prometheus-webhook-snmp-5-build" Jan 26 17:25:40 crc 
kubenswrapper[4856]: I0126 17:25:40.480220 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/77516bf5-33dc-4722-85d1-8cd196e055b3-builder-dockercfg-8h4xs-pull\") pod \"prometheus-webhook-snmp-5-build\" (UID: \"77516bf5-33dc-4722-85d1-8cd196e055b3\") " pod="service-telemetry/prometheus-webhook-snmp-5-build" Jan 26 17:25:40 crc kubenswrapper[4856]: I0126 17:25:40.480248 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/77516bf5-33dc-4722-85d1-8cd196e055b3-build-system-configs\") pod \"prometheus-webhook-snmp-5-build\" (UID: \"77516bf5-33dc-4722-85d1-8cd196e055b3\") " pod="service-telemetry/prometheus-webhook-snmp-5-build" Jan 26 17:25:40 crc kubenswrapper[4856]: I0126 17:25:40.480272 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/77516bf5-33dc-4722-85d1-8cd196e055b3-container-storage-run\") pod \"prometheus-webhook-snmp-5-build\" (UID: \"77516bf5-33dc-4722-85d1-8cd196e055b3\") " pod="service-telemetry/prometheus-webhook-snmp-5-build" Jan 26 17:25:40 crc kubenswrapper[4856]: I0126 17:25:40.480582 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/77516bf5-33dc-4722-85d1-8cd196e055b3-node-pullsecrets\") pod \"prometheus-webhook-snmp-5-build\" (UID: \"77516bf5-33dc-4722-85d1-8cd196e055b3\") " pod="service-telemetry/prometheus-webhook-snmp-5-build" Jan 26 17:25:40 crc kubenswrapper[4856]: I0126 17:25:40.480612 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/77516bf5-33dc-4722-85d1-8cd196e055b3-build-blob-cache\") pod \"prometheus-webhook-snmp-5-build\" (UID: \"77516bf5-33dc-4722-85d1-8cd196e055b3\") " 
pod="service-telemetry/prometheus-webhook-snmp-5-build" Jan 26 17:25:40 crc kubenswrapper[4856]: I0126 17:25:40.480711 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/77516bf5-33dc-4722-85d1-8cd196e055b3-container-storage-run\") pod \"prometheus-webhook-snmp-5-build\" (UID: \"77516bf5-33dc-4722-85d1-8cd196e055b3\") " pod="service-telemetry/prometheus-webhook-snmp-5-build" Jan 26 17:25:40 crc kubenswrapper[4856]: I0126 17:25:40.480864 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77516bf5-33dc-4722-85d1-8cd196e055b3-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-5-build\" (UID: \"77516bf5-33dc-4722-85d1-8cd196e055b3\") " pod="service-telemetry/prometheus-webhook-snmp-5-build" Jan 26 17:25:40 crc kubenswrapper[4856]: I0126 17:25:40.480901 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/77516bf5-33dc-4722-85d1-8cd196e055b3-buildcachedir\") pod \"prometheus-webhook-snmp-5-build\" (UID: \"77516bf5-33dc-4722-85d1-8cd196e055b3\") " pod="service-telemetry/prometheus-webhook-snmp-5-build" Jan 26 17:25:40 crc kubenswrapper[4856]: I0126 17:25:40.481003 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/77516bf5-33dc-4722-85d1-8cd196e055b3-buildworkdir\") pod \"prometheus-webhook-snmp-5-build\" (UID: \"77516bf5-33dc-4722-85d1-8cd196e055b3\") " pod="service-telemetry/prometheus-webhook-snmp-5-build" Jan 26 17:25:40 crc kubenswrapper[4856]: I0126 17:25:40.481210 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/77516bf5-33dc-4722-85d1-8cd196e055b3-container-storage-root\") pod \"prometheus-webhook-snmp-5-build\" (UID: 
\"77516bf5-33dc-4722-85d1-8cd196e055b3\") " pod="service-telemetry/prometheus-webhook-snmp-5-build" Jan 26 17:25:40 crc kubenswrapper[4856]: I0126 17:25:40.481415 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/77516bf5-33dc-4722-85d1-8cd196e055b3-build-system-configs\") pod \"prometheus-webhook-snmp-5-build\" (UID: \"77516bf5-33dc-4722-85d1-8cd196e055b3\") " pod="service-telemetry/prometheus-webhook-snmp-5-build" Jan 26 17:25:40 crc kubenswrapper[4856]: I0126 17:25:40.482384 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77516bf5-33dc-4722-85d1-8cd196e055b3-build-ca-bundles\") pod \"prometheus-webhook-snmp-5-build\" (UID: \"77516bf5-33dc-4722-85d1-8cd196e055b3\") " pod="service-telemetry/prometheus-webhook-snmp-5-build" Jan 26 17:25:40 crc kubenswrapper[4856]: I0126 17:25:40.487068 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/77516bf5-33dc-4722-85d1-8cd196e055b3-builder-dockercfg-8h4xs-pull\") pod \"prometheus-webhook-snmp-5-build\" (UID: \"77516bf5-33dc-4722-85d1-8cd196e055b3\") " pod="service-telemetry/prometheus-webhook-snmp-5-build" Jan 26 17:25:40 crc kubenswrapper[4856]: I0126 17:25:40.487482 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/77516bf5-33dc-4722-85d1-8cd196e055b3-builder-dockercfg-8h4xs-push\") pod \"prometheus-webhook-snmp-5-build\" (UID: \"77516bf5-33dc-4722-85d1-8cd196e055b3\") " pod="service-telemetry/prometheus-webhook-snmp-5-build" Jan 26 17:25:40 crc kubenswrapper[4856]: I0126 17:25:40.506890 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb88j\" (UniqueName: 
\"kubernetes.io/projected/77516bf5-33dc-4722-85d1-8cd196e055b3-kube-api-access-vb88j\") pod \"prometheus-webhook-snmp-5-build\" (UID: \"77516bf5-33dc-4722-85d1-8cd196e055b3\") " pod="service-telemetry/prometheus-webhook-snmp-5-build" Jan 26 17:25:40 crc kubenswrapper[4856]: I0126 17:25:40.769464 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-5-build" Jan 26 17:25:41 crc kubenswrapper[4856]: I0126 17:25:41.248592 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-5-build"] Jan 26 17:25:41 crc kubenswrapper[4856]: I0126 17:25:41.620558 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-5-build" event={"ID":"77516bf5-33dc-4722-85d1-8cd196e055b3","Type":"ContainerStarted","Data":"4bf608fd02a4ba9adf66de6dc6325d52af81af1b294ec61636551feb5cd91850"} Jan 26 17:25:41 crc kubenswrapper[4856]: I0126 17:25:41.620612 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-5-build" event={"ID":"77516bf5-33dc-4722-85d1-8cd196e055b3","Type":"ContainerStarted","Data":"73566cebe45397c2722640fc7ffd2e04eb8a444fc2c63f9f96879b8d7aec440b"} Jan 26 17:25:42 crc kubenswrapper[4856]: I0126 17:25:42.629454 4856 generic.go:334] "Generic (PLEG): container finished" podID="77516bf5-33dc-4722-85d1-8cd196e055b3" containerID="4bf608fd02a4ba9adf66de6dc6325d52af81af1b294ec61636551feb5cd91850" exitCode=0 Jan 26 17:25:42 crc kubenswrapper[4856]: I0126 17:25:42.629577 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-5-build" event={"ID":"77516bf5-33dc-4722-85d1-8cd196e055b3","Type":"ContainerDied","Data":"4bf608fd02a4ba9adf66de6dc6325d52af81af1b294ec61636551feb5cd91850"} Jan 26 17:25:43 crc kubenswrapper[4856]: I0126 17:25:43.637489 4856 generic.go:334] "Generic (PLEG): container finished" 
podID="77516bf5-33dc-4722-85d1-8cd196e055b3" containerID="77d3e37cbe082f8e8cf8830aed97e12cbb79797a661886cf41da5c29c802032d" exitCode=0 Jan 26 17:25:43 crc kubenswrapper[4856]: I0126 17:25:43.637567 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-5-build" event={"ID":"77516bf5-33dc-4722-85d1-8cd196e055b3","Type":"ContainerDied","Data":"77d3e37cbe082f8e8cf8830aed97e12cbb79797a661886cf41da5c29c802032d"} Jan 26 17:25:43 crc kubenswrapper[4856]: I0126 17:25:43.823453 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-5-build_77516bf5-33dc-4722-85d1-8cd196e055b3/manage-dockerfile/0.log" Jan 26 17:25:44 crc kubenswrapper[4856]: I0126 17:25:44.647751 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-5-build" event={"ID":"77516bf5-33dc-4722-85d1-8cd196e055b3","Type":"ContainerStarted","Data":"085cbd8b2c148470c95639832aef6196db9721aff2d0bb845f95a05fa42d58e1"} Jan 26 17:25:44 crc kubenswrapper[4856]: I0126 17:25:44.692550 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-5-build" podStartSLOduration=4.692501115 podStartE2EDuration="4.692501115s" podCreationTimestamp="2026-01-26 17:25:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:25:44.687716118 +0000 UTC m=+1640.640970119" watchObservedRunningTime="2026-01-26 17:25:44.692501115 +0000 UTC m=+1640.645755096" Jan 26 17:25:56 crc kubenswrapper[4856]: I0126 17:25:56.938347 4856 patch_prober.go:28] interesting pod/machine-config-daemon-xm9cq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:25:56 crc kubenswrapper[4856]: 
I0126 17:25:56.938861 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:26:13 crc kubenswrapper[4856]: I0126 17:26:13.863440 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-5-build_77516bf5-33dc-4722-85d1-8cd196e055b3/docker-build/0.log" Jan 26 17:26:13 crc kubenswrapper[4856]: I0126 17:26:13.865870 4856 generic.go:334] "Generic (PLEG): container finished" podID="77516bf5-33dc-4722-85d1-8cd196e055b3" containerID="085cbd8b2c148470c95639832aef6196db9721aff2d0bb845f95a05fa42d58e1" exitCode=1 Jan 26 17:26:13 crc kubenswrapper[4856]: I0126 17:26:13.865919 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-5-build" event={"ID":"77516bf5-33dc-4722-85d1-8cd196e055b3","Type":"ContainerDied","Data":"085cbd8b2c148470c95639832aef6196db9721aff2d0bb845f95a05fa42d58e1"} Jan 26 17:26:15 crc kubenswrapper[4856]: I0126 17:26:15.120368 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-5-build_77516bf5-33dc-4722-85d1-8cd196e055b3/docker-build/0.log" Jan 26 17:26:15 crc kubenswrapper[4856]: I0126 17:26:15.121771 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-5-build" Jan 26 17:26:15 crc kubenswrapper[4856]: I0126 17:26:15.248832 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/77516bf5-33dc-4722-85d1-8cd196e055b3-builder-dockercfg-8h4xs-pull\") pod \"77516bf5-33dc-4722-85d1-8cd196e055b3\" (UID: \"77516bf5-33dc-4722-85d1-8cd196e055b3\") " Jan 26 17:26:15 crc kubenswrapper[4856]: I0126 17:26:15.248891 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/77516bf5-33dc-4722-85d1-8cd196e055b3-node-pullsecrets\") pod \"77516bf5-33dc-4722-85d1-8cd196e055b3\" (UID: \"77516bf5-33dc-4722-85d1-8cd196e055b3\") " Jan 26 17:26:15 crc kubenswrapper[4856]: I0126 17:26:15.248935 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/77516bf5-33dc-4722-85d1-8cd196e055b3-builder-dockercfg-8h4xs-push\") pod \"77516bf5-33dc-4722-85d1-8cd196e055b3\" (UID: \"77516bf5-33dc-4722-85d1-8cd196e055b3\") " Jan 26 17:26:15 crc kubenswrapper[4856]: I0126 17:26:15.248965 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/77516bf5-33dc-4722-85d1-8cd196e055b3-build-system-configs\") pod \"77516bf5-33dc-4722-85d1-8cd196e055b3\" (UID: \"77516bf5-33dc-4722-85d1-8cd196e055b3\") " Jan 26 17:26:15 crc kubenswrapper[4856]: I0126 17:26:15.248996 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77516bf5-33dc-4722-85d1-8cd196e055b3-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "77516bf5-33dc-4722-85d1-8cd196e055b3" (UID: "77516bf5-33dc-4722-85d1-8cd196e055b3"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:26:15 crc kubenswrapper[4856]: I0126 17:26:15.249020 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/77516bf5-33dc-4722-85d1-8cd196e055b3-container-storage-run\") pod \"77516bf5-33dc-4722-85d1-8cd196e055b3\" (UID: \"77516bf5-33dc-4722-85d1-8cd196e055b3\") " Jan 26 17:26:15 crc kubenswrapper[4856]: I0126 17:26:15.249158 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/77516bf5-33dc-4722-85d1-8cd196e055b3-build-blob-cache\") pod \"77516bf5-33dc-4722-85d1-8cd196e055b3\" (UID: \"77516bf5-33dc-4722-85d1-8cd196e055b3\") " Jan 26 17:26:15 crc kubenswrapper[4856]: I0126 17:26:15.249190 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77516bf5-33dc-4722-85d1-8cd196e055b3-build-ca-bundles\") pod \"77516bf5-33dc-4722-85d1-8cd196e055b3\" (UID: \"77516bf5-33dc-4722-85d1-8cd196e055b3\") " Jan 26 17:26:15 crc kubenswrapper[4856]: I0126 17:26:15.249214 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/77516bf5-33dc-4722-85d1-8cd196e055b3-buildworkdir\") pod \"77516bf5-33dc-4722-85d1-8cd196e055b3\" (UID: \"77516bf5-33dc-4722-85d1-8cd196e055b3\") " Jan 26 17:26:15 crc kubenswrapper[4856]: I0126 17:26:15.249236 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77516bf5-33dc-4722-85d1-8cd196e055b3-build-proxy-ca-bundles\") pod \"77516bf5-33dc-4722-85d1-8cd196e055b3\" (UID: \"77516bf5-33dc-4722-85d1-8cd196e055b3\") " Jan 26 17:26:15 crc kubenswrapper[4856]: I0126 17:26:15.249259 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-vb88j\" (UniqueName: \"kubernetes.io/projected/77516bf5-33dc-4722-85d1-8cd196e055b3-kube-api-access-vb88j\") pod \"77516bf5-33dc-4722-85d1-8cd196e055b3\" (UID: \"77516bf5-33dc-4722-85d1-8cd196e055b3\") " Jan 26 17:26:15 crc kubenswrapper[4856]: I0126 17:26:15.249280 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/77516bf5-33dc-4722-85d1-8cd196e055b3-buildcachedir\") pod \"77516bf5-33dc-4722-85d1-8cd196e055b3\" (UID: \"77516bf5-33dc-4722-85d1-8cd196e055b3\") " Jan 26 17:26:15 crc kubenswrapper[4856]: I0126 17:26:15.249305 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/77516bf5-33dc-4722-85d1-8cd196e055b3-container-storage-root\") pod \"77516bf5-33dc-4722-85d1-8cd196e055b3\" (UID: \"77516bf5-33dc-4722-85d1-8cd196e055b3\") " Jan 26 17:26:15 crc kubenswrapper[4856]: I0126 17:26:15.249516 4856 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/77516bf5-33dc-4722-85d1-8cd196e055b3-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 26 17:26:15 crc kubenswrapper[4856]: I0126 17:26:15.250480 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77516bf5-33dc-4722-85d1-8cd196e055b3-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "77516bf5-33dc-4722-85d1-8cd196e055b3" (UID: "77516bf5-33dc-4722-85d1-8cd196e055b3"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:26:15 crc kubenswrapper[4856]: I0126 17:26:15.250556 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77516bf5-33dc-4722-85d1-8cd196e055b3-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "77516bf5-33dc-4722-85d1-8cd196e055b3" (UID: "77516bf5-33dc-4722-85d1-8cd196e055b3"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:26:15 crc kubenswrapper[4856]: I0126 17:26:15.251364 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77516bf5-33dc-4722-85d1-8cd196e055b3-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "77516bf5-33dc-4722-85d1-8cd196e055b3" (UID: "77516bf5-33dc-4722-85d1-8cd196e055b3"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:26:15 crc kubenswrapper[4856]: I0126 17:26:15.251973 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77516bf5-33dc-4722-85d1-8cd196e055b3-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "77516bf5-33dc-4722-85d1-8cd196e055b3" (UID: "77516bf5-33dc-4722-85d1-8cd196e055b3"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:26:15 crc kubenswrapper[4856]: I0126 17:26:15.253929 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77516bf5-33dc-4722-85d1-8cd196e055b3-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "77516bf5-33dc-4722-85d1-8cd196e055b3" (UID: "77516bf5-33dc-4722-85d1-8cd196e055b3"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:26:15 crc kubenswrapper[4856]: I0126 17:26:15.254169 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77516bf5-33dc-4722-85d1-8cd196e055b3-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "77516bf5-33dc-4722-85d1-8cd196e055b3" (UID: "77516bf5-33dc-4722-85d1-8cd196e055b3"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:26:15 crc kubenswrapper[4856]: I0126 17:26:15.256737 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77516bf5-33dc-4722-85d1-8cd196e055b3-kube-api-access-vb88j" (OuterVolumeSpecName: "kube-api-access-vb88j") pod "77516bf5-33dc-4722-85d1-8cd196e055b3" (UID: "77516bf5-33dc-4722-85d1-8cd196e055b3"). InnerVolumeSpecName "kube-api-access-vb88j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:26:15 crc kubenswrapper[4856]: I0126 17:26:15.257168 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77516bf5-33dc-4722-85d1-8cd196e055b3-builder-dockercfg-8h4xs-pull" (OuterVolumeSpecName: "builder-dockercfg-8h4xs-pull") pod "77516bf5-33dc-4722-85d1-8cd196e055b3" (UID: "77516bf5-33dc-4722-85d1-8cd196e055b3"). InnerVolumeSpecName "builder-dockercfg-8h4xs-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:26:15 crc kubenswrapper[4856]: I0126 17:26:15.257676 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77516bf5-33dc-4722-85d1-8cd196e055b3-builder-dockercfg-8h4xs-push" (OuterVolumeSpecName: "builder-dockercfg-8h4xs-push") pod "77516bf5-33dc-4722-85d1-8cd196e055b3" (UID: "77516bf5-33dc-4722-85d1-8cd196e055b3"). InnerVolumeSpecName "builder-dockercfg-8h4xs-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:26:15 crc kubenswrapper[4856]: I0126 17:26:15.350826 4856 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77516bf5-33dc-4722-85d1-8cd196e055b3-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 17:26:15 crc kubenswrapper[4856]: I0126 17:26:15.350864 4856 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/77516bf5-33dc-4722-85d1-8cd196e055b3-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 26 17:26:15 crc kubenswrapper[4856]: I0126 17:26:15.350874 4856 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77516bf5-33dc-4722-85d1-8cd196e055b3-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 17:26:15 crc kubenswrapper[4856]: I0126 17:26:15.350886 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vb88j\" (UniqueName: \"kubernetes.io/projected/77516bf5-33dc-4722-85d1-8cd196e055b3-kube-api-access-vb88j\") on node \"crc\" DevicePath \"\"" Jan 26 17:26:15 crc kubenswrapper[4856]: I0126 17:26:15.350897 4856 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/77516bf5-33dc-4722-85d1-8cd196e055b3-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 26 17:26:15 crc kubenswrapper[4856]: I0126 17:26:15.350907 4856 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8h4xs-pull\" (UniqueName: \"kubernetes.io/secret/77516bf5-33dc-4722-85d1-8cd196e055b3-builder-dockercfg-8h4xs-pull\") on node \"crc\" DevicePath \"\"" Jan 26 17:26:15 crc kubenswrapper[4856]: I0126 17:26:15.350917 4856 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-8h4xs-push\" (UniqueName: \"kubernetes.io/secret/77516bf5-33dc-4722-85d1-8cd196e055b3-builder-dockercfg-8h4xs-push\") on node \"crc\" 
DevicePath \"\"" Jan 26 17:26:15 crc kubenswrapper[4856]: I0126 17:26:15.351126 4856 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/77516bf5-33dc-4722-85d1-8cd196e055b3-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 26 17:26:15 crc kubenswrapper[4856]: I0126 17:26:15.351139 4856 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/77516bf5-33dc-4722-85d1-8cd196e055b3-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 26 17:26:15 crc kubenswrapper[4856]: I0126 17:26:15.353121 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77516bf5-33dc-4722-85d1-8cd196e055b3-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "77516bf5-33dc-4722-85d1-8cd196e055b3" (UID: "77516bf5-33dc-4722-85d1-8cd196e055b3"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:26:15 crc kubenswrapper[4856]: I0126 17:26:15.452776 4856 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/77516bf5-33dc-4722-85d1-8cd196e055b3-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 26 17:26:15 crc kubenswrapper[4856]: I0126 17:26:15.822432 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77516bf5-33dc-4722-85d1-8cd196e055b3-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "77516bf5-33dc-4722-85d1-8cd196e055b3" (UID: "77516bf5-33dc-4722-85d1-8cd196e055b3"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:26:15 crc kubenswrapper[4856]: I0126 17:26:15.857412 4856 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/77516bf5-33dc-4722-85d1-8cd196e055b3-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 26 17:26:15 crc kubenswrapper[4856]: I0126 17:26:15.885062 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-5-build_77516bf5-33dc-4722-85d1-8cd196e055b3/docker-build/0.log" Jan 26 17:26:15 crc kubenswrapper[4856]: I0126 17:26:15.885908 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-5-build" event={"ID":"77516bf5-33dc-4722-85d1-8cd196e055b3","Type":"ContainerDied","Data":"73566cebe45397c2722640fc7ffd2e04eb8a444fc2c63f9f96879b8d7aec440b"} Jan 26 17:26:15 crc kubenswrapper[4856]: I0126 17:26:15.885941 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73566cebe45397c2722640fc7ffd2e04eb8a444fc2c63f9f96879b8d7aec440b" Jan 26 17:26:15 crc kubenswrapper[4856]: I0126 17:26:15.885980 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-5-build" Jan 26 17:26:26 crc kubenswrapper[4856]: I0126 17:26:26.939507 4856 patch_prober.go:28] interesting pod/machine-config-daemon-xm9cq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:26:26 crc kubenswrapper[4856]: I0126 17:26:26.940155 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:26:29 crc kubenswrapper[4856]: I0126 17:26:29.671606 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hwx7g"] Jan 26 17:26:29 crc kubenswrapper[4856]: E0126 17:26:29.672229 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77516bf5-33dc-4722-85d1-8cd196e055b3" containerName="manage-dockerfile" Jan 26 17:26:29 crc kubenswrapper[4856]: I0126 17:26:29.672246 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="77516bf5-33dc-4722-85d1-8cd196e055b3" containerName="manage-dockerfile" Jan 26 17:26:29 crc kubenswrapper[4856]: E0126 17:26:29.672264 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77516bf5-33dc-4722-85d1-8cd196e055b3" containerName="git-clone" Jan 26 17:26:29 crc kubenswrapper[4856]: I0126 17:26:29.672273 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="77516bf5-33dc-4722-85d1-8cd196e055b3" containerName="git-clone" Jan 26 17:26:29 crc kubenswrapper[4856]: E0126 17:26:29.672295 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77516bf5-33dc-4722-85d1-8cd196e055b3" containerName="docker-build" Jan 26 
17:26:29 crc kubenswrapper[4856]: I0126 17:26:29.672302 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="77516bf5-33dc-4722-85d1-8cd196e055b3" containerName="docker-build" Jan 26 17:26:29 crc kubenswrapper[4856]: I0126 17:26:29.672403 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="77516bf5-33dc-4722-85d1-8cd196e055b3" containerName="docker-build" Jan 26 17:26:29 crc kubenswrapper[4856]: I0126 17:26:29.674317 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hwx7g" Jan 26 17:26:29 crc kubenswrapper[4856]: I0126 17:26:29.679261 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hwx7g"] Jan 26 17:26:29 crc kubenswrapper[4856]: I0126 17:26:29.857500 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44rwk\" (UniqueName: \"kubernetes.io/projected/a26e6806-50dc-4680-a6d4-01f73fce49da-kube-api-access-44rwk\") pod \"certified-operators-hwx7g\" (UID: \"a26e6806-50dc-4680-a6d4-01f73fce49da\") " pod="openshift-marketplace/certified-operators-hwx7g" Jan 26 17:26:29 crc kubenswrapper[4856]: I0126 17:26:29.857578 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a26e6806-50dc-4680-a6d4-01f73fce49da-catalog-content\") pod \"certified-operators-hwx7g\" (UID: \"a26e6806-50dc-4680-a6d4-01f73fce49da\") " pod="openshift-marketplace/certified-operators-hwx7g" Jan 26 17:26:29 crc kubenswrapper[4856]: I0126 17:26:29.857612 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a26e6806-50dc-4680-a6d4-01f73fce49da-utilities\") pod \"certified-operators-hwx7g\" (UID: \"a26e6806-50dc-4680-a6d4-01f73fce49da\") " pod="openshift-marketplace/certified-operators-hwx7g" Jan 
26 17:26:29 crc kubenswrapper[4856]: I0126 17:26:29.959389 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44rwk\" (UniqueName: \"kubernetes.io/projected/a26e6806-50dc-4680-a6d4-01f73fce49da-kube-api-access-44rwk\") pod \"certified-operators-hwx7g\" (UID: \"a26e6806-50dc-4680-a6d4-01f73fce49da\") " pod="openshift-marketplace/certified-operators-hwx7g" Jan 26 17:26:29 crc kubenswrapper[4856]: I0126 17:26:29.959444 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a26e6806-50dc-4680-a6d4-01f73fce49da-catalog-content\") pod \"certified-operators-hwx7g\" (UID: \"a26e6806-50dc-4680-a6d4-01f73fce49da\") " pod="openshift-marketplace/certified-operators-hwx7g" Jan 26 17:26:29 crc kubenswrapper[4856]: I0126 17:26:29.959468 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a26e6806-50dc-4680-a6d4-01f73fce49da-utilities\") pod \"certified-operators-hwx7g\" (UID: \"a26e6806-50dc-4680-a6d4-01f73fce49da\") " pod="openshift-marketplace/certified-operators-hwx7g" Jan 26 17:26:29 crc kubenswrapper[4856]: I0126 17:26:29.960067 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a26e6806-50dc-4680-a6d4-01f73fce49da-catalog-content\") pod \"certified-operators-hwx7g\" (UID: \"a26e6806-50dc-4680-a6d4-01f73fce49da\") " pod="openshift-marketplace/certified-operators-hwx7g" Jan 26 17:26:29 crc kubenswrapper[4856]: I0126 17:26:29.960236 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a26e6806-50dc-4680-a6d4-01f73fce49da-utilities\") pod \"certified-operators-hwx7g\" (UID: \"a26e6806-50dc-4680-a6d4-01f73fce49da\") " pod="openshift-marketplace/certified-operators-hwx7g" Jan 26 17:26:29 crc kubenswrapper[4856]: I0126 
17:26:29.989510 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44rwk\" (UniqueName: \"kubernetes.io/projected/a26e6806-50dc-4680-a6d4-01f73fce49da-kube-api-access-44rwk\") pod \"certified-operators-hwx7g\" (UID: \"a26e6806-50dc-4680-a6d4-01f73fce49da\") " pod="openshift-marketplace/certified-operators-hwx7g" Jan 26 17:26:29 crc kubenswrapper[4856]: I0126 17:26:29.992070 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hwx7g" Jan 26 17:26:30 crc kubenswrapper[4856]: I0126 17:26:30.478559 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hwx7g"] Jan 26 17:26:31 crc kubenswrapper[4856]: I0126 17:26:31.004661 4856 generic.go:334] "Generic (PLEG): container finished" podID="a26e6806-50dc-4680-a6d4-01f73fce49da" containerID="416cabdd5226bae8c40dffba6482cf11182130853f71df6b638445e7408ef751" exitCode=0 Jan 26 17:26:31 crc kubenswrapper[4856]: I0126 17:26:31.004726 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hwx7g" event={"ID":"a26e6806-50dc-4680-a6d4-01f73fce49da","Type":"ContainerDied","Data":"416cabdd5226bae8c40dffba6482cf11182130853f71df6b638445e7408ef751"} Jan 26 17:26:31 crc kubenswrapper[4856]: I0126 17:26:31.005102 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hwx7g" event={"ID":"a26e6806-50dc-4680-a6d4-01f73fce49da","Type":"ContainerStarted","Data":"ecdf439a6fbc2c2d5caa5779f36a413aa1c7d30d9a0374701a3ccdd3bf9dacd1"} Jan 26 17:26:35 crc kubenswrapper[4856]: I0126 17:26:35.046741 4856 generic.go:334] "Generic (PLEG): container finished" podID="a26e6806-50dc-4680-a6d4-01f73fce49da" containerID="5f00836821d46247c6984dd8810c92bcfea2ef9f97d77ed0de71b6b21ef7919c" exitCode=0 Jan 26 17:26:35 crc kubenswrapper[4856]: I0126 17:26:35.046823 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-hwx7g" event={"ID":"a26e6806-50dc-4680-a6d4-01f73fce49da","Type":"ContainerDied","Data":"5f00836821d46247c6984dd8810c92bcfea2ef9f97d77ed0de71b6b21ef7919c"} Jan 26 17:26:36 crc kubenswrapper[4856]: I0126 17:26:36.057050 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hwx7g" event={"ID":"a26e6806-50dc-4680-a6d4-01f73fce49da","Type":"ContainerStarted","Data":"9d274f3313f18a3beb64c0ccabe70ee5b097ee7ee5ee0b5b90ae0dc6885d8137"} Jan 26 17:26:36 crc kubenswrapper[4856]: I0126 17:26:36.075520 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hwx7g" podStartSLOduration=2.5638584079999998 podStartE2EDuration="7.075486858s" podCreationTimestamp="2026-01-26 17:26:29 +0000 UTC" firstStartedPulling="2026-01-26 17:26:31.006423583 +0000 UTC m=+1686.959677564" lastFinishedPulling="2026-01-26 17:26:35.518052033 +0000 UTC m=+1691.471306014" observedRunningTime="2026-01-26 17:26:36.072181613 +0000 UTC m=+1692.025435634" watchObservedRunningTime="2026-01-26 17:26:36.075486858 +0000 UTC m=+1692.028740879" Jan 26 17:26:39 crc kubenswrapper[4856]: I0126 17:26:39.993178 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hwx7g" Jan 26 17:26:39 crc kubenswrapper[4856]: I0126 17:26:39.994181 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hwx7g" Jan 26 17:26:40 crc kubenswrapper[4856]: I0126 17:26:40.060713 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hwx7g" Jan 26 17:26:40 crc kubenswrapper[4856]: I0126 17:26:40.142993 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hwx7g" Jan 26 17:26:40 crc kubenswrapper[4856]: I0126 17:26:40.302583 4856 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hwx7g"] Jan 26 17:26:40 crc kubenswrapper[4856]: I0126 17:26:40.362035 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bxhpt"] Jan 26 17:26:40 crc kubenswrapper[4856]: I0126 17:26:40.362354 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bxhpt" podUID="5f34c6a8-6023-480c-a25e-46f8c4f3766b" containerName="registry-server" containerID="cri-o://094471a22cb2be3806f3e0c5d6e07e36d6a5a423b089fe73b28707ddde1dde10" gracePeriod=2 Jan 26 17:26:41 crc kubenswrapper[4856]: I0126 17:26:41.908862 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bxhpt" Jan 26 17:26:42 crc kubenswrapper[4856]: I0126 17:26:42.031357 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f34c6a8-6023-480c-a25e-46f8c4f3766b-utilities\") pod \"5f34c6a8-6023-480c-a25e-46f8c4f3766b\" (UID: \"5f34c6a8-6023-480c-a25e-46f8c4f3766b\") " Jan 26 17:26:42 crc kubenswrapper[4856]: I0126 17:26:42.031486 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f34c6a8-6023-480c-a25e-46f8c4f3766b-catalog-content\") pod \"5f34c6a8-6023-480c-a25e-46f8c4f3766b\" (UID: \"5f34c6a8-6023-480c-a25e-46f8c4f3766b\") " Jan 26 17:26:42 crc kubenswrapper[4856]: I0126 17:26:42.031536 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crdtn\" (UniqueName: \"kubernetes.io/projected/5f34c6a8-6023-480c-a25e-46f8c4f3766b-kube-api-access-crdtn\") pod \"5f34c6a8-6023-480c-a25e-46f8c4f3766b\" (UID: \"5f34c6a8-6023-480c-a25e-46f8c4f3766b\") " Jan 26 17:26:42 crc kubenswrapper[4856]: I0126 17:26:42.032082 4856 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f34c6a8-6023-480c-a25e-46f8c4f3766b-utilities" (OuterVolumeSpecName: "utilities") pod "5f34c6a8-6023-480c-a25e-46f8c4f3766b" (UID: "5f34c6a8-6023-480c-a25e-46f8c4f3766b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:26:42 crc kubenswrapper[4856]: I0126 17:26:42.037340 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f34c6a8-6023-480c-a25e-46f8c4f3766b-kube-api-access-crdtn" (OuterVolumeSpecName: "kube-api-access-crdtn") pod "5f34c6a8-6023-480c-a25e-46f8c4f3766b" (UID: "5f34c6a8-6023-480c-a25e-46f8c4f3766b"). InnerVolumeSpecName "kube-api-access-crdtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:26:42 crc kubenswrapper[4856]: I0126 17:26:42.075665 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f34c6a8-6023-480c-a25e-46f8c4f3766b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5f34c6a8-6023-480c-a25e-46f8c4f3766b" (UID: "5f34c6a8-6023-480c-a25e-46f8c4f3766b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:26:42 crc kubenswrapper[4856]: I0126 17:26:42.104354 4856 generic.go:334] "Generic (PLEG): container finished" podID="5f34c6a8-6023-480c-a25e-46f8c4f3766b" containerID="094471a22cb2be3806f3e0c5d6e07e36d6a5a423b089fe73b28707ddde1dde10" exitCode=0 Jan 26 17:26:42 crc kubenswrapper[4856]: I0126 17:26:42.104758 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bxhpt" event={"ID":"5f34c6a8-6023-480c-a25e-46f8c4f3766b","Type":"ContainerDied","Data":"094471a22cb2be3806f3e0c5d6e07e36d6a5a423b089fe73b28707ddde1dde10"} Jan 26 17:26:42 crc kubenswrapper[4856]: I0126 17:26:42.104842 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bxhpt" Jan 26 17:26:42 crc kubenswrapper[4856]: I0126 17:26:42.104873 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bxhpt" event={"ID":"5f34c6a8-6023-480c-a25e-46f8c4f3766b","Type":"ContainerDied","Data":"ae868d389f2d56b098915bbed54fc03534f4fd1519a0d344eda69f6356db31f0"} Jan 26 17:26:42 crc kubenswrapper[4856]: I0126 17:26:42.104982 4856 scope.go:117] "RemoveContainer" containerID="094471a22cb2be3806f3e0c5d6e07e36d6a5a423b089fe73b28707ddde1dde10" Jan 26 17:26:42 crc kubenswrapper[4856]: I0126 17:26:42.141860 4856 scope.go:117] "RemoveContainer" containerID="ab4dc2168cf3030f71b121144c073ab78a14a965d8feaf1a0de933f786e1cb89" Jan 26 17:26:42 crc kubenswrapper[4856]: I0126 17:26:42.143855 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f34c6a8-6023-480c-a25e-46f8c4f3766b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:26:42 crc kubenswrapper[4856]: I0126 17:26:42.143902 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crdtn\" (UniqueName: \"kubernetes.io/projected/5f34c6a8-6023-480c-a25e-46f8c4f3766b-kube-api-access-crdtn\") on node \"crc\" DevicePath \"\"" Jan 26 17:26:42 crc kubenswrapper[4856]: I0126 17:26:42.143915 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f34c6a8-6023-480c-a25e-46f8c4f3766b-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:26:42 crc kubenswrapper[4856]: I0126 17:26:42.147687 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bxhpt"] Jan 26 17:26:42 crc kubenswrapper[4856]: I0126 17:26:42.152358 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bxhpt"] Jan 26 17:26:42 crc kubenswrapper[4856]: I0126 17:26:42.175498 4856 scope.go:117] 
"RemoveContainer" containerID="405a5e16f820190605a6762d0b7653fa1a6bedd12b761afcd55093867a05ee57" Jan 26 17:26:42 crc kubenswrapper[4856]: I0126 17:26:42.197875 4856 scope.go:117] "RemoveContainer" containerID="094471a22cb2be3806f3e0c5d6e07e36d6a5a423b089fe73b28707ddde1dde10" Jan 26 17:26:42 crc kubenswrapper[4856]: E0126 17:26:42.198455 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"094471a22cb2be3806f3e0c5d6e07e36d6a5a423b089fe73b28707ddde1dde10\": container with ID starting with 094471a22cb2be3806f3e0c5d6e07e36d6a5a423b089fe73b28707ddde1dde10 not found: ID does not exist" containerID="094471a22cb2be3806f3e0c5d6e07e36d6a5a423b089fe73b28707ddde1dde10" Jan 26 17:26:42 crc kubenswrapper[4856]: I0126 17:26:42.198610 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"094471a22cb2be3806f3e0c5d6e07e36d6a5a423b089fe73b28707ddde1dde10"} err="failed to get container status \"094471a22cb2be3806f3e0c5d6e07e36d6a5a423b089fe73b28707ddde1dde10\": rpc error: code = NotFound desc = could not find container \"094471a22cb2be3806f3e0c5d6e07e36d6a5a423b089fe73b28707ddde1dde10\": container with ID starting with 094471a22cb2be3806f3e0c5d6e07e36d6a5a423b089fe73b28707ddde1dde10 not found: ID does not exist" Jan 26 17:26:42 crc kubenswrapper[4856]: I0126 17:26:42.198713 4856 scope.go:117] "RemoveContainer" containerID="ab4dc2168cf3030f71b121144c073ab78a14a965d8feaf1a0de933f786e1cb89" Jan 26 17:26:42 crc kubenswrapper[4856]: E0126 17:26:42.199256 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab4dc2168cf3030f71b121144c073ab78a14a965d8feaf1a0de933f786e1cb89\": container with ID starting with ab4dc2168cf3030f71b121144c073ab78a14a965d8feaf1a0de933f786e1cb89 not found: ID does not exist" containerID="ab4dc2168cf3030f71b121144c073ab78a14a965d8feaf1a0de933f786e1cb89" Jan 26 17:26:42 crc 
kubenswrapper[4856]: I0126 17:26:42.199353 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab4dc2168cf3030f71b121144c073ab78a14a965d8feaf1a0de933f786e1cb89"} err="failed to get container status \"ab4dc2168cf3030f71b121144c073ab78a14a965d8feaf1a0de933f786e1cb89\": rpc error: code = NotFound desc = could not find container \"ab4dc2168cf3030f71b121144c073ab78a14a965d8feaf1a0de933f786e1cb89\": container with ID starting with ab4dc2168cf3030f71b121144c073ab78a14a965d8feaf1a0de933f786e1cb89 not found: ID does not exist" Jan 26 17:26:42 crc kubenswrapper[4856]: I0126 17:26:42.199461 4856 scope.go:117] "RemoveContainer" containerID="405a5e16f820190605a6762d0b7653fa1a6bedd12b761afcd55093867a05ee57" Jan 26 17:26:42 crc kubenswrapper[4856]: E0126 17:26:42.199853 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"405a5e16f820190605a6762d0b7653fa1a6bedd12b761afcd55093867a05ee57\": container with ID starting with 405a5e16f820190605a6762d0b7653fa1a6bedd12b761afcd55093867a05ee57 not found: ID does not exist" containerID="405a5e16f820190605a6762d0b7653fa1a6bedd12b761afcd55093867a05ee57" Jan 26 17:26:42 crc kubenswrapper[4856]: I0126 17:26:42.199903 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"405a5e16f820190605a6762d0b7653fa1a6bedd12b761afcd55093867a05ee57"} err="failed to get container status \"405a5e16f820190605a6762d0b7653fa1a6bedd12b761afcd55093867a05ee57\": rpc error: code = NotFound desc = could not find container \"405a5e16f820190605a6762d0b7653fa1a6bedd12b761afcd55093867a05ee57\": container with ID starting with 405a5e16f820190605a6762d0b7653fa1a6bedd12b761afcd55093867a05ee57 not found: ID does not exist" Jan 26 17:26:43 crc kubenswrapper[4856]: I0126 17:26:43.403965 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f34c6a8-6023-480c-a25e-46f8c4f3766b" 
path="/var/lib/kubelet/pods/5f34c6a8-6023-480c-a25e-46f8c4f3766b/volumes" Jan 26 17:26:56 crc kubenswrapper[4856]: I0126 17:26:56.938996 4856 patch_prober.go:28] interesting pod/machine-config-daemon-xm9cq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:26:56 crc kubenswrapper[4856]: I0126 17:26:56.939787 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:26:56 crc kubenswrapper[4856]: I0126 17:26:56.939854 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" Jan 26 17:26:56 crc kubenswrapper[4856]: I0126 17:26:56.941050 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b8175a0e79754a858867d9a98f2aa6c52214536db6005b6724cf907eb7a891ee"} pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 17:26:56 crc kubenswrapper[4856]: I0126 17:26:56.941219 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" containerName="machine-config-daemon" containerID="cri-o://b8175a0e79754a858867d9a98f2aa6c52214536db6005b6724cf907eb7a891ee" gracePeriod=600 Jan 26 17:26:59 crc kubenswrapper[4856]: E0126 17:26:59.854476 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xm9cq_openshift-machine-config-operator(63c75ede-5170-4db0-811b-5217ef8d72b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" Jan 26 17:27:00 crc kubenswrapper[4856]: I0126 17:27:00.250219 4856 generic.go:334] "Generic (PLEG): container finished" podID="63c75ede-5170-4db0-811b-5217ef8d72b3" containerID="b8175a0e79754a858867d9a98f2aa6c52214536db6005b6724cf907eb7a891ee" exitCode=0 Jan 26 17:27:00 crc kubenswrapper[4856]: I0126 17:27:00.250299 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" event={"ID":"63c75ede-5170-4db0-811b-5217ef8d72b3","Type":"ContainerDied","Data":"b8175a0e79754a858867d9a98f2aa6c52214536db6005b6724cf907eb7a891ee"} Jan 26 17:27:00 crc kubenswrapper[4856]: I0126 17:27:00.250475 4856 scope.go:117] "RemoveContainer" containerID="cda3cdbac0b1e3c460ee9a5617b9c5fd59d4db5c67a69b81c9224934be12563c" Jan 26 17:27:00 crc kubenswrapper[4856]: I0126 17:27:00.251031 4856 scope.go:117] "RemoveContainer" containerID="b8175a0e79754a858867d9a98f2aa6c52214536db6005b6724cf907eb7a891ee" Jan 26 17:27:00 crc kubenswrapper[4856]: E0126 17:27:00.252746 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xm9cq_openshift-machine-config-operator(63c75ede-5170-4db0-811b-5217ef8d72b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" Jan 26 17:27:06 crc kubenswrapper[4856]: I0126 17:27:06.134582 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pxr8g/must-gather-xhb5b"] Jan 26 17:27:06 crc 
kubenswrapper[4856]: E0126 17:27:06.135599 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f34c6a8-6023-480c-a25e-46f8c4f3766b" containerName="extract-utilities" Jan 26 17:27:06 crc kubenswrapper[4856]: I0126 17:27:06.135630 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f34c6a8-6023-480c-a25e-46f8c4f3766b" containerName="extract-utilities" Jan 26 17:27:06 crc kubenswrapper[4856]: E0126 17:27:06.135653 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f34c6a8-6023-480c-a25e-46f8c4f3766b" containerName="registry-server" Jan 26 17:27:06 crc kubenswrapper[4856]: I0126 17:27:06.135662 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f34c6a8-6023-480c-a25e-46f8c4f3766b" containerName="registry-server" Jan 26 17:27:06 crc kubenswrapper[4856]: E0126 17:27:06.135678 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f34c6a8-6023-480c-a25e-46f8c4f3766b" containerName="extract-content" Jan 26 17:27:06 crc kubenswrapper[4856]: I0126 17:27:06.135688 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f34c6a8-6023-480c-a25e-46f8c4f3766b" containerName="extract-content" Jan 26 17:27:06 crc kubenswrapper[4856]: I0126 17:27:06.135913 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f34c6a8-6023-480c-a25e-46f8c4f3766b" containerName="registry-server" Jan 26 17:27:06 crc kubenswrapper[4856]: I0126 17:27:06.136909 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pxr8g/must-gather-xhb5b" Jan 26 17:27:06 crc kubenswrapper[4856]: I0126 17:27:06.139385 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-pxr8g"/"kube-root-ca.crt" Jan 26 17:27:06 crc kubenswrapper[4856]: I0126 17:27:06.139782 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-pxr8g"/"default-dockercfg-jpqwb" Jan 26 17:27:06 crc kubenswrapper[4856]: I0126 17:27:06.142463 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-pxr8g"/"openshift-service-ca.crt" Jan 26 17:27:06 crc kubenswrapper[4856]: I0126 17:27:06.202948 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-pxr8g/must-gather-xhb5b"] Jan 26 17:27:06 crc kubenswrapper[4856]: I0126 17:27:06.243139 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0190f6f3-2762-486f-8648-e59c9e5179ad-must-gather-output\") pod \"must-gather-xhb5b\" (UID: \"0190f6f3-2762-486f-8648-e59c9e5179ad\") " pod="openshift-must-gather-pxr8g/must-gather-xhb5b" Jan 26 17:27:06 crc kubenswrapper[4856]: I0126 17:27:06.243194 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vms6d\" (UniqueName: \"kubernetes.io/projected/0190f6f3-2762-486f-8648-e59c9e5179ad-kube-api-access-vms6d\") pod \"must-gather-xhb5b\" (UID: \"0190f6f3-2762-486f-8648-e59c9e5179ad\") " pod="openshift-must-gather-pxr8g/must-gather-xhb5b" Jan 26 17:27:06 crc kubenswrapper[4856]: I0126 17:27:06.344151 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vms6d\" (UniqueName: \"kubernetes.io/projected/0190f6f3-2762-486f-8648-e59c9e5179ad-kube-api-access-vms6d\") pod \"must-gather-xhb5b\" (UID: \"0190f6f3-2762-486f-8648-e59c9e5179ad\") " 
pod="openshift-must-gather-pxr8g/must-gather-xhb5b" Jan 26 17:27:06 crc kubenswrapper[4856]: I0126 17:27:06.344255 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0190f6f3-2762-486f-8648-e59c9e5179ad-must-gather-output\") pod \"must-gather-xhb5b\" (UID: \"0190f6f3-2762-486f-8648-e59c9e5179ad\") " pod="openshift-must-gather-pxr8g/must-gather-xhb5b" Jan 26 17:27:06 crc kubenswrapper[4856]: I0126 17:27:06.344689 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0190f6f3-2762-486f-8648-e59c9e5179ad-must-gather-output\") pod \"must-gather-xhb5b\" (UID: \"0190f6f3-2762-486f-8648-e59c9e5179ad\") " pod="openshift-must-gather-pxr8g/must-gather-xhb5b" Jan 26 17:27:06 crc kubenswrapper[4856]: I0126 17:27:06.371410 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vms6d\" (UniqueName: \"kubernetes.io/projected/0190f6f3-2762-486f-8648-e59c9e5179ad-kube-api-access-vms6d\") pod \"must-gather-xhb5b\" (UID: \"0190f6f3-2762-486f-8648-e59c9e5179ad\") " pod="openshift-must-gather-pxr8g/must-gather-xhb5b" Jan 26 17:27:06 crc kubenswrapper[4856]: I0126 17:27:06.529804 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pxr8g/must-gather-xhb5b" Jan 26 17:27:06 crc kubenswrapper[4856]: I0126 17:27:06.805813 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-pxr8g/must-gather-xhb5b"] Jan 26 17:27:07 crc kubenswrapper[4856]: I0126 17:27:07.304889 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pxr8g/must-gather-xhb5b" event={"ID":"0190f6f3-2762-486f-8648-e59c9e5179ad","Type":"ContainerStarted","Data":"3ce26f76fc446d8a550d3746649cb18a49ea0c18d56a871d55ff91fd1f0a4363"} Jan 26 17:27:12 crc kubenswrapper[4856]: I0126 17:27:12.395456 4856 scope.go:117] "RemoveContainer" containerID="b8175a0e79754a858867d9a98f2aa6c52214536db6005b6724cf907eb7a891ee" Jan 26 17:27:12 crc kubenswrapper[4856]: E0126 17:27:12.396112 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xm9cq_openshift-machine-config-operator(63c75ede-5170-4db0-811b-5217ef8d72b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" Jan 26 17:27:20 crc kubenswrapper[4856]: I0126 17:27:20.449287 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pxr8g/must-gather-xhb5b" event={"ID":"0190f6f3-2762-486f-8648-e59c9e5179ad","Type":"ContainerStarted","Data":"3f464ae4156c51b0e76930516abce0275686a5ccb1e2b35e0204d8a9e05e513a"} Jan 26 17:27:20 crc kubenswrapper[4856]: I0126 17:27:20.449873 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pxr8g/must-gather-xhb5b" event={"ID":"0190f6f3-2762-486f-8648-e59c9e5179ad","Type":"ContainerStarted","Data":"1c6e1c2b24da05f9138f08890b3f71dfa17ade72ec99276a5860cbc9c8b26e99"} Jan 26 17:27:20 crc kubenswrapper[4856]: I0126 17:27:20.467647 4856 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-must-gather-pxr8g/must-gather-xhb5b" podStartSLOduration=1.827602512 podStartE2EDuration="14.467609024s" podCreationTimestamp="2026-01-26 17:27:06 +0000 UTC" firstStartedPulling="2026-01-26 17:27:06.811204187 +0000 UTC m=+1722.764458168" lastFinishedPulling="2026-01-26 17:27:19.451210699 +0000 UTC m=+1735.404464680" observedRunningTime="2026-01-26 17:27:20.46221228 +0000 UTC m=+1736.415466291" watchObservedRunningTime="2026-01-26 17:27:20.467609024 +0000 UTC m=+1736.420863015" Jan 26 17:27:25 crc kubenswrapper[4856]: I0126 17:27:25.400650 4856 scope.go:117] "RemoveContainer" containerID="b8175a0e79754a858867d9a98f2aa6c52214536db6005b6724cf907eb7a891ee" Jan 26 17:27:25 crc kubenswrapper[4856]: E0126 17:27:25.401561 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xm9cq_openshift-machine-config-operator(63c75ede-5170-4db0-811b-5217ef8d72b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" Jan 26 17:27:40 crc kubenswrapper[4856]: I0126 17:27:40.395739 4856 scope.go:117] "RemoveContainer" containerID="b8175a0e79754a858867d9a98f2aa6c52214536db6005b6724cf907eb7a891ee" Jan 26 17:27:40 crc kubenswrapper[4856]: E0126 17:27:40.396381 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xm9cq_openshift-machine-config-operator(63c75ede-5170-4db0-811b-5217ef8d72b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" Jan 26 17:27:43 crc kubenswrapper[4856]: I0126 17:27:43.868093 4856 scope.go:117] "RemoveContainer" 
containerID="1f422f325f8fa610d27451f9734bb53bd3fdded1c3e711a971293ebcfe442247" Jan 26 17:27:43 crc kubenswrapper[4856]: I0126 17:27:43.894439 4856 scope.go:117] "RemoveContainer" containerID="37c0850dfb46215185250da7eccd6ad1561ff7e374d7cfc24a1386d6bf8bcf2f" Jan 26 17:27:52 crc kubenswrapper[4856]: I0126 17:27:52.396728 4856 scope.go:117] "RemoveContainer" containerID="b8175a0e79754a858867d9a98f2aa6c52214536db6005b6724cf907eb7a891ee" Jan 26 17:27:52 crc kubenswrapper[4856]: E0126 17:27:52.397457 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xm9cq_openshift-machine-config-operator(63c75ede-5170-4db0-811b-5217ef8d72b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" Jan 26 17:28:03 crc kubenswrapper[4856]: I0126 17:28:03.395329 4856 scope.go:117] "RemoveContainer" containerID="b8175a0e79754a858867d9a98f2aa6c52214536db6005b6724cf907eb7a891ee" Jan 26 17:28:03 crc kubenswrapper[4856]: E0126 17:28:03.396138 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xm9cq_openshift-machine-config-operator(63c75ede-5170-4db0-811b-5217ef8d72b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" Jan 26 17:28:03 crc kubenswrapper[4856]: I0126 17:28:03.771853 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-rrhjv_a1546392-4a69-4b12-8d7e-97450b73b7ca/control-plane-machine-set-operator/0.log" Jan 26 17:28:03 crc kubenswrapper[4856]: I0126 17:28:03.911930 4856 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-7xb2b_77a97acb-2908-48fb-8bcd-0647f3e90160/kube-rbac-proxy/0.log" Jan 26 17:28:03 crc kubenswrapper[4856]: I0126 17:28:03.926453 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-7xb2b_77a97acb-2908-48fb-8bcd-0647f3e90160/machine-api-operator/0.log" Jan 26 17:28:14 crc kubenswrapper[4856]: I0126 17:28:14.394782 4856 scope.go:117] "RemoveContainer" containerID="b8175a0e79754a858867d9a98f2aa6c52214536db6005b6724cf907eb7a891ee" Jan 26 17:28:14 crc kubenswrapper[4856]: E0126 17:28:14.395461 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xm9cq_openshift-machine-config-operator(63c75ede-5170-4db0-811b-5217ef8d72b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" Jan 26 17:28:17 crc kubenswrapper[4856]: I0126 17:28:17.586730 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-xcqr4_3dc10d6b-aa48-4c7d-afab-45fa62298819/cert-manager-controller/0.log" Jan 26 17:28:17 crc kubenswrapper[4856]: I0126 17:28:17.709096 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-rm9wd_24a9d780-2b57-49d2-9cb9-eac2456ed86d/cert-manager-cainjector/0.log" Jan 26 17:28:17 crc kubenswrapper[4856]: I0126 17:28:17.783922 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-www8b_e9288910-baf7-4cc4-b313-c87b80bfdd3e/cert-manager-webhook/0.log" Jan 26 17:28:28 crc kubenswrapper[4856]: I0126 17:28:28.395754 4856 scope.go:117] "RemoveContainer" containerID="b8175a0e79754a858867d9a98f2aa6c52214536db6005b6724cf907eb7a891ee" Jan 26 17:28:28 crc 
kubenswrapper[4856]: E0126 17:28:28.396566 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xm9cq_openshift-machine-config-operator(63c75ede-5170-4db0-811b-5217ef8d72b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" Jan 26 17:28:31 crc kubenswrapper[4856]: I0126 17:28:31.270745 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-cq2gx_e31d2d53-8992-45e3-98aa-24ea73236248/prometheus-operator/0.log" Jan 26 17:28:31 crc kubenswrapper[4856]: I0126 17:28:31.516544 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-68d9fdc4dd-cf7wn_7c88687f-1304-4709-b148-a196f0d0190d/prometheus-operator-admission-webhook/0.log" Jan 26 17:28:31 crc kubenswrapper[4856]: I0126 17:28:31.544616 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-68d9fdc4dd-jbq25_766f50ba-0751-4f25-a6db-3b7195e72f55/prometheus-operator-admission-webhook/0.log" Jan 26 17:28:31 crc kubenswrapper[4856]: I0126 17:28:31.786856 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-fpn2h_a4ae7646-2afb-4ada-b8a4-d20a69f87949/operator/0.log" Jan 26 17:28:31 crc kubenswrapper[4856]: I0126 17:28:31.795543 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-5bmfp_bd7597f2-d44b-4e1b-ac60-b409985e3351/perses-operator/0.log" Jan 26 17:28:42 crc kubenswrapper[4856]: I0126 17:28:42.395851 4856 scope.go:117] "RemoveContainer" containerID="b8175a0e79754a858867d9a98f2aa6c52214536db6005b6724cf907eb7a891ee" Jan 26 17:28:42 crc kubenswrapper[4856]: 
E0126 17:28:42.396796 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xm9cq_openshift-machine-config-operator(63c75ede-5170-4db0-811b-5217ef8d72b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" Jan 26 17:28:45 crc kubenswrapper[4856]: I0126 17:28:45.520258 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ank5l6_6521dc23-8f4e-452f-ae3e-167424fa3ed2/util/0.log" Jan 26 17:28:45 crc kubenswrapper[4856]: I0126 17:28:45.741401 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ank5l6_6521dc23-8f4e-452f-ae3e-167424fa3ed2/pull/0.log" Jan 26 17:28:45 crc kubenswrapper[4856]: I0126 17:28:45.760251 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ank5l6_6521dc23-8f4e-452f-ae3e-167424fa3ed2/util/0.log" Jan 26 17:28:45 crc kubenswrapper[4856]: I0126 17:28:45.766905 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ank5l6_6521dc23-8f4e-452f-ae3e-167424fa3ed2/pull/0.log" Jan 26 17:28:45 crc kubenswrapper[4856]: I0126 17:28:45.934251 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ank5l6_6521dc23-8f4e-452f-ae3e-167424fa3ed2/util/0.log" Jan 26 17:28:45 crc kubenswrapper[4856]: I0126 17:28:45.954036 4856 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ank5l6_6521dc23-8f4e-452f-ae3e-167424fa3ed2/pull/0.log" Jan 26 17:28:45 crc kubenswrapper[4856]: I0126 17:28:45.960684 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ank5l6_6521dc23-8f4e-452f-ae3e-167424fa3ed2/extract/0.log" Jan 26 17:28:46 crc kubenswrapper[4856]: I0126 17:28:46.097738 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxrgwg_7105e655-ab8e-4fc0-b205-0bafaa6d7d91/util/0.log" Jan 26 17:28:46 crc kubenswrapper[4856]: I0126 17:28:46.282138 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxrgwg_7105e655-ab8e-4fc0-b205-0bafaa6d7d91/pull/0.log" Jan 26 17:28:46 crc kubenswrapper[4856]: I0126 17:28:46.282984 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxrgwg_7105e655-ab8e-4fc0-b205-0bafaa6d7d91/pull/0.log" Jan 26 17:28:46 crc kubenswrapper[4856]: I0126 17:28:46.287475 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxrgwg_7105e655-ab8e-4fc0-b205-0bafaa6d7d91/util/0.log" Jan 26 17:28:46 crc kubenswrapper[4856]: I0126 17:28:46.407242 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxrgwg_7105e655-ab8e-4fc0-b205-0bafaa6d7d91/util/0.log" Jan 26 17:28:46 crc kubenswrapper[4856]: I0126 17:28:46.460372 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxrgwg_7105e655-ab8e-4fc0-b205-0bafaa6d7d91/pull/0.log" Jan 26 
17:28:46 crc kubenswrapper[4856]: I0126 17:28:46.464896 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxrgwg_7105e655-ab8e-4fc0-b205-0bafaa6d7d91/extract/0.log" Jan 26 17:28:46 crc kubenswrapper[4856]: I0126 17:28:46.586286 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e25mvq_26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a/util/0.log" Jan 26 17:28:46 crc kubenswrapper[4856]: I0126 17:28:46.729152 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e25mvq_26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a/util/0.log" Jan 26 17:28:46 crc kubenswrapper[4856]: I0126 17:28:46.730800 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e25mvq_26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a/pull/0.log" Jan 26 17:28:46 crc kubenswrapper[4856]: I0126 17:28:46.763727 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e25mvq_26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a/pull/0.log" Jan 26 17:28:46 crc kubenswrapper[4856]: I0126 17:28:46.890350 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e25mvq_26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a/util/0.log" Jan 26 17:28:46 crc kubenswrapper[4856]: I0126 17:28:46.892383 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e25mvq_26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a/extract/0.log" Jan 26 17:28:46 crc kubenswrapper[4856]: I0126 17:28:46.904684 4856 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e25mvq_26dfca06-7f67-45e3-ba7c-fb3b0a4ce25a/pull/0.log" Jan 26 17:28:47 crc kubenswrapper[4856]: I0126 17:28:47.031913 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08924r9_64c65d72-3459-4893-a33a-9033e12f188a/util/0.log" Jan 26 17:28:47 crc kubenswrapper[4856]: I0126 17:28:47.211497 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08924r9_64c65d72-3459-4893-a33a-9033e12f188a/util/0.log" Jan 26 17:28:47 crc kubenswrapper[4856]: I0126 17:28:47.212617 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08924r9_64c65d72-3459-4893-a33a-9033e12f188a/pull/0.log" Jan 26 17:28:47 crc kubenswrapper[4856]: I0126 17:28:47.242660 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08924r9_64c65d72-3459-4893-a33a-9033e12f188a/pull/0.log" Jan 26 17:28:47 crc kubenswrapper[4856]: I0126 17:28:47.409089 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08924r9_64c65d72-3459-4893-a33a-9033e12f188a/util/0.log" Jan 26 17:28:47 crc kubenswrapper[4856]: I0126 17:28:47.424215 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08924r9_64c65d72-3459-4893-a33a-9033e12f188a/extract/0.log" Jan 26 17:28:47 crc kubenswrapper[4856]: I0126 17:28:47.439315 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08924r9_64c65d72-3459-4893-a33a-9033e12f188a/pull/0.log" Jan 26 
17:28:47 crc kubenswrapper[4856]: I0126 17:28:47.654471 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hwx7g_a26e6806-50dc-4680-a6d4-01f73fce49da/extract-utilities/0.log" Jan 26 17:28:47 crc kubenswrapper[4856]: I0126 17:28:47.864116 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hwx7g_a26e6806-50dc-4680-a6d4-01f73fce49da/extract-content/0.log" Jan 26 17:28:47 crc kubenswrapper[4856]: I0126 17:28:47.873037 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hwx7g_a26e6806-50dc-4680-a6d4-01f73fce49da/extract-utilities/0.log" Jan 26 17:28:47 crc kubenswrapper[4856]: I0126 17:28:47.877360 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hwx7g_a26e6806-50dc-4680-a6d4-01f73fce49da/extract-content/0.log" Jan 26 17:28:48 crc kubenswrapper[4856]: I0126 17:28:48.052150 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hwx7g_a26e6806-50dc-4680-a6d4-01f73fce49da/extract-content/0.log" Jan 26 17:28:48 crc kubenswrapper[4856]: I0126 17:28:48.069110 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hwx7g_a26e6806-50dc-4680-a6d4-01f73fce49da/extract-utilities/0.log" Jan 26 17:28:48 crc kubenswrapper[4856]: I0126 17:28:48.175269 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hwx7g_a26e6806-50dc-4680-a6d4-01f73fce49da/registry-server/0.log" Jan 26 17:28:48 crc kubenswrapper[4856]: I0126 17:28:48.282403 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gdp2n_4327b726-2edc-40ad-ac96-b19a7e020048/extract-utilities/0.log" Jan 26 17:28:48 crc kubenswrapper[4856]: I0126 17:28:48.440755 4856 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-gdp2n_4327b726-2edc-40ad-ac96-b19a7e020048/extract-utilities/0.log" Jan 26 17:28:48 crc kubenswrapper[4856]: I0126 17:28:48.441753 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gdp2n_4327b726-2edc-40ad-ac96-b19a7e020048/extract-content/0.log" Jan 26 17:28:48 crc kubenswrapper[4856]: I0126 17:28:48.443663 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gdp2n_4327b726-2edc-40ad-ac96-b19a7e020048/extract-content/0.log" Jan 26 17:28:48 crc kubenswrapper[4856]: I0126 17:28:48.569187 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gdp2n_4327b726-2edc-40ad-ac96-b19a7e020048/extract-utilities/0.log" Jan 26 17:28:48 crc kubenswrapper[4856]: I0126 17:28:48.627880 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gdp2n_4327b726-2edc-40ad-ac96-b19a7e020048/extract-content/0.log" Jan 26 17:28:48 crc kubenswrapper[4856]: I0126 17:28:48.864863 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-tdtfh_566ca894-037a-4b73-95d4-a6246c7c851a/marketplace-operator/0.log" Jan 26 17:28:48 crc kubenswrapper[4856]: I0126 17:28:48.880265 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lfhpz_8cd36133-7a25-4dae-83a4-bbd0fbf1f2ee/extract-utilities/0.log" Jan 26 17:28:49 crc kubenswrapper[4856]: I0126 17:28:49.131191 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gdp2n_4327b726-2edc-40ad-ac96-b19a7e020048/registry-server/0.log" Jan 26 17:28:49 crc kubenswrapper[4856]: I0126 17:28:49.169466 4856 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-lfhpz_8cd36133-7a25-4dae-83a4-bbd0fbf1f2ee/extract-utilities/0.log" Jan 26 17:28:49 crc kubenswrapper[4856]: I0126 17:28:49.227732 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lfhpz_8cd36133-7a25-4dae-83a4-bbd0fbf1f2ee/extract-content/0.log" Jan 26 17:28:49 crc kubenswrapper[4856]: I0126 17:28:49.257297 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lfhpz_8cd36133-7a25-4dae-83a4-bbd0fbf1f2ee/extract-content/0.log" Jan 26 17:28:49 crc kubenswrapper[4856]: I0126 17:28:49.357469 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lfhpz_8cd36133-7a25-4dae-83a4-bbd0fbf1f2ee/extract-content/0.log" Jan 26 17:28:49 crc kubenswrapper[4856]: I0126 17:28:49.394907 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lfhpz_8cd36133-7a25-4dae-83a4-bbd0fbf1f2ee/extract-utilities/0.log" Jan 26 17:28:49 crc kubenswrapper[4856]: I0126 17:28:49.868941 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lfhpz_8cd36133-7a25-4dae-83a4-bbd0fbf1f2ee/registry-server/0.log" Jan 26 17:28:57 crc kubenswrapper[4856]: I0126 17:28:57.395284 4856 scope.go:117] "RemoveContainer" containerID="b8175a0e79754a858867d9a98f2aa6c52214536db6005b6724cf907eb7a891ee" Jan 26 17:28:57 crc kubenswrapper[4856]: E0126 17:28:57.396275 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xm9cq_openshift-machine-config-operator(63c75ede-5170-4db0-811b-5217ef8d72b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" Jan 26 17:29:01 crc 
kubenswrapper[4856]: I0126 17:29:01.849557 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-cq2gx_e31d2d53-8992-45e3-98aa-24ea73236248/prometheus-operator/0.log" Jan 26 17:29:01 crc kubenswrapper[4856]: I0126 17:29:01.873773 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-68d9fdc4dd-cf7wn_7c88687f-1304-4709-b148-a196f0d0190d/prometheus-operator-admission-webhook/0.log" Jan 26 17:29:01 crc kubenswrapper[4856]: I0126 17:29:01.893942 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-68d9fdc4dd-jbq25_766f50ba-0751-4f25-a6db-3b7195e72f55/prometheus-operator-admission-webhook/0.log" Jan 26 17:29:02 crc kubenswrapper[4856]: I0126 17:29:02.037720 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-5bmfp_bd7597f2-d44b-4e1b-ac60-b409985e3351/perses-operator/0.log" Jan 26 17:29:02 crc kubenswrapper[4856]: I0126 17:29:02.052984 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-fpn2h_a4ae7646-2afb-4ada-b8a4-d20a69f87949/operator/0.log" Jan 26 17:29:11 crc kubenswrapper[4856]: I0126 17:29:11.395085 4856 scope.go:117] "RemoveContainer" containerID="b8175a0e79754a858867d9a98f2aa6c52214536db6005b6724cf907eb7a891ee" Jan 26 17:29:11 crc kubenswrapper[4856]: E0126 17:29:11.396083 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xm9cq_openshift-machine-config-operator(63c75ede-5170-4db0-811b-5217ef8d72b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" Jan 26 17:29:23 crc kubenswrapper[4856]: 
I0126 17:29:23.396200 4856 scope.go:117] "RemoveContainer" containerID="b8175a0e79754a858867d9a98f2aa6c52214536db6005b6724cf907eb7a891ee" Jan 26 17:29:23 crc kubenswrapper[4856]: E0126 17:29:23.397588 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xm9cq_openshift-machine-config-operator(63c75ede-5170-4db0-811b-5217ef8d72b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" Jan 26 17:29:38 crc kubenswrapper[4856]: I0126 17:29:38.396196 4856 scope.go:117] "RemoveContainer" containerID="b8175a0e79754a858867d9a98f2aa6c52214536db6005b6724cf907eb7a891ee" Jan 26 17:29:38 crc kubenswrapper[4856]: E0126 17:29:38.397085 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xm9cq_openshift-machine-config-operator(63c75ede-5170-4db0-811b-5217ef8d72b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" Jan 26 17:29:53 crc kubenswrapper[4856]: I0126 17:29:53.395739 4856 scope.go:117] "RemoveContainer" containerID="b8175a0e79754a858867d9a98f2aa6c52214536db6005b6724cf907eb7a891ee" Jan 26 17:29:53 crc kubenswrapper[4856]: E0126 17:29:53.398168 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xm9cq_openshift-machine-config-operator(63c75ede-5170-4db0-811b-5217ef8d72b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" Jan 26 17:30:00 crc 
kubenswrapper[4856]: I0126 17:30:00.159396 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490810-lg76m"] Jan 26 17:30:00 crc kubenswrapper[4856]: I0126 17:30:00.161093 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-lg76m" Jan 26 17:30:00 crc kubenswrapper[4856]: I0126 17:30:00.164537 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 17:30:00 crc kubenswrapper[4856]: I0126 17:30:00.164522 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 17:30:00 crc kubenswrapper[4856]: I0126 17:30:00.185908 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490810-lg76m"] Jan 26 17:30:00 crc kubenswrapper[4856]: I0126 17:30:00.262647 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnmfh\" (UniqueName: \"kubernetes.io/projected/886317f6-fcda-4393-a343-e9dda96dbc1f-kube-api-access-qnmfh\") pod \"collect-profiles-29490810-lg76m\" (UID: \"886317f6-fcda-4393-a343-e9dda96dbc1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-lg76m" Jan 26 17:30:00 crc kubenswrapper[4856]: I0126 17:30:00.263027 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/886317f6-fcda-4393-a343-e9dda96dbc1f-config-volume\") pod \"collect-profiles-29490810-lg76m\" (UID: \"886317f6-fcda-4393-a343-e9dda96dbc1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-lg76m" Jan 26 17:30:00 crc kubenswrapper[4856]: I0126 17:30:00.263062 4856 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/886317f6-fcda-4393-a343-e9dda96dbc1f-secret-volume\") pod \"collect-profiles-29490810-lg76m\" (UID: \"886317f6-fcda-4393-a343-e9dda96dbc1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-lg76m" Jan 26 17:30:00 crc kubenswrapper[4856]: I0126 17:30:00.365092 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnmfh\" (UniqueName: \"kubernetes.io/projected/886317f6-fcda-4393-a343-e9dda96dbc1f-kube-api-access-qnmfh\") pod \"collect-profiles-29490810-lg76m\" (UID: \"886317f6-fcda-4393-a343-e9dda96dbc1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-lg76m" Jan 26 17:30:00 crc kubenswrapper[4856]: I0126 17:30:00.365670 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/886317f6-fcda-4393-a343-e9dda96dbc1f-config-volume\") pod \"collect-profiles-29490810-lg76m\" (UID: \"886317f6-fcda-4393-a343-e9dda96dbc1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-lg76m" Jan 26 17:30:00 crc kubenswrapper[4856]: I0126 17:30:00.365793 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/886317f6-fcda-4393-a343-e9dda96dbc1f-secret-volume\") pod \"collect-profiles-29490810-lg76m\" (UID: \"886317f6-fcda-4393-a343-e9dda96dbc1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-lg76m" Jan 26 17:30:00 crc kubenswrapper[4856]: I0126 17:30:00.367211 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/886317f6-fcda-4393-a343-e9dda96dbc1f-config-volume\") pod \"collect-profiles-29490810-lg76m\" (UID: \"886317f6-fcda-4393-a343-e9dda96dbc1f\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-lg76m" Jan 26 17:30:00 crc kubenswrapper[4856]: I0126 17:30:00.374838 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/886317f6-fcda-4393-a343-e9dda96dbc1f-secret-volume\") pod \"collect-profiles-29490810-lg76m\" (UID: \"886317f6-fcda-4393-a343-e9dda96dbc1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-lg76m" Jan 26 17:30:00 crc kubenswrapper[4856]: I0126 17:30:00.386027 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnmfh\" (UniqueName: \"kubernetes.io/projected/886317f6-fcda-4393-a343-e9dda96dbc1f-kube-api-access-qnmfh\") pod \"collect-profiles-29490810-lg76m\" (UID: \"886317f6-fcda-4393-a343-e9dda96dbc1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-lg76m" Jan 26 17:30:00 crc kubenswrapper[4856]: I0126 17:30:00.488644 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-lg76m" Jan 26 17:30:00 crc kubenswrapper[4856]: I0126 17:30:00.529108 4856 generic.go:334] "Generic (PLEG): container finished" podID="0190f6f3-2762-486f-8648-e59c9e5179ad" containerID="1c6e1c2b24da05f9138f08890b3f71dfa17ade72ec99276a5860cbc9c8b26e99" exitCode=0 Jan 26 17:30:00 crc kubenswrapper[4856]: I0126 17:30:00.529232 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pxr8g/must-gather-xhb5b" event={"ID":"0190f6f3-2762-486f-8648-e59c9e5179ad","Type":"ContainerDied","Data":"1c6e1c2b24da05f9138f08890b3f71dfa17ade72ec99276a5860cbc9c8b26e99"} Jan 26 17:30:00 crc kubenswrapper[4856]: I0126 17:30:00.530145 4856 scope.go:117] "RemoveContainer" containerID="1c6e1c2b24da05f9138f08890b3f71dfa17ade72ec99276a5860cbc9c8b26e99" Jan 26 17:30:01 crc kubenswrapper[4856]: I0126 17:30:01.000687 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-pxr8g_must-gather-xhb5b_0190f6f3-2762-486f-8648-e59c9e5179ad/gather/0.log" Jan 26 17:30:01 crc kubenswrapper[4856]: I0126 17:30:01.016031 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490810-lg76m"] Jan 26 17:30:01 crc kubenswrapper[4856]: I0126 17:30:01.539383 4856 generic.go:334] "Generic (PLEG): container finished" podID="886317f6-fcda-4393-a343-e9dda96dbc1f" containerID="6d9a89251f6438ddca07d6a74a8d062369c94886db42eccc2d70e510259e9cf6" exitCode=0 Jan 26 17:30:01 crc kubenswrapper[4856]: I0126 17:30:01.539439 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-lg76m" event={"ID":"886317f6-fcda-4393-a343-e9dda96dbc1f","Type":"ContainerDied","Data":"6d9a89251f6438ddca07d6a74a8d062369c94886db42eccc2d70e510259e9cf6"} Jan 26 17:30:01 crc kubenswrapper[4856]: I0126 17:30:01.539501 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-lg76m" event={"ID":"886317f6-fcda-4393-a343-e9dda96dbc1f","Type":"ContainerStarted","Data":"eca76d55256c0050d6dac39b2a19159f5841b8a78d5df6bb7d1baf089d8d380e"} Jan 26 17:30:02 crc kubenswrapper[4856]: I0126 17:30:02.850171 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-lg76m" Jan 26 17:30:03 crc kubenswrapper[4856]: I0126 17:30:03.000242 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/886317f6-fcda-4393-a343-e9dda96dbc1f-secret-volume\") pod \"886317f6-fcda-4393-a343-e9dda96dbc1f\" (UID: \"886317f6-fcda-4393-a343-e9dda96dbc1f\") " Jan 26 17:30:03 crc kubenswrapper[4856]: I0126 17:30:03.000328 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/886317f6-fcda-4393-a343-e9dda96dbc1f-config-volume\") pod \"886317f6-fcda-4393-a343-e9dda96dbc1f\" (UID: \"886317f6-fcda-4393-a343-e9dda96dbc1f\") " Jan 26 17:30:03 crc kubenswrapper[4856]: I0126 17:30:03.000452 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnmfh\" (UniqueName: \"kubernetes.io/projected/886317f6-fcda-4393-a343-e9dda96dbc1f-kube-api-access-qnmfh\") pod \"886317f6-fcda-4393-a343-e9dda96dbc1f\" (UID: \"886317f6-fcda-4393-a343-e9dda96dbc1f\") " Jan 26 17:30:03 crc kubenswrapper[4856]: I0126 17:30:03.001083 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/886317f6-fcda-4393-a343-e9dda96dbc1f-config-volume" (OuterVolumeSpecName: "config-volume") pod "886317f6-fcda-4393-a343-e9dda96dbc1f" (UID: "886317f6-fcda-4393-a343-e9dda96dbc1f"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:30:03 crc kubenswrapper[4856]: I0126 17:30:03.005868 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/886317f6-fcda-4393-a343-e9dda96dbc1f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "886317f6-fcda-4393-a343-e9dda96dbc1f" (UID: "886317f6-fcda-4393-a343-e9dda96dbc1f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:30:03 crc kubenswrapper[4856]: I0126 17:30:03.006014 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/886317f6-fcda-4393-a343-e9dda96dbc1f-kube-api-access-qnmfh" (OuterVolumeSpecName: "kube-api-access-qnmfh") pod "886317f6-fcda-4393-a343-e9dda96dbc1f" (UID: "886317f6-fcda-4393-a343-e9dda96dbc1f"). InnerVolumeSpecName "kube-api-access-qnmfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:30:03 crc kubenswrapper[4856]: I0126 17:30:03.102021 4856 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/886317f6-fcda-4393-a343-e9dda96dbc1f-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 17:30:03 crc kubenswrapper[4856]: I0126 17:30:03.102071 4856 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/886317f6-fcda-4393-a343-e9dda96dbc1f-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 17:30:03 crc kubenswrapper[4856]: I0126 17:30:03.102090 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnmfh\" (UniqueName: \"kubernetes.io/projected/886317f6-fcda-4393-a343-e9dda96dbc1f-kube-api-access-qnmfh\") on node \"crc\" DevicePath \"\"" Jan 26 17:30:03 crc kubenswrapper[4856]: I0126 17:30:03.559758 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-lg76m" 
event={"ID":"886317f6-fcda-4393-a343-e9dda96dbc1f","Type":"ContainerDied","Data":"eca76d55256c0050d6dac39b2a19159f5841b8a78d5df6bb7d1baf089d8d380e"} Jan 26 17:30:03 crc kubenswrapper[4856]: I0126 17:30:03.559818 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eca76d55256c0050d6dac39b2a19159f5841b8a78d5df6bb7d1baf089d8d380e" Jan 26 17:30:03 crc kubenswrapper[4856]: I0126 17:30:03.559847 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-lg76m" Jan 26 17:30:07 crc kubenswrapper[4856]: I0126 17:30:07.396373 4856 scope.go:117] "RemoveContainer" containerID="b8175a0e79754a858867d9a98f2aa6c52214536db6005b6724cf907eb7a891ee" Jan 26 17:30:07 crc kubenswrapper[4856]: E0126 17:30:07.397275 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xm9cq_openshift-machine-config-operator(63c75ede-5170-4db0-811b-5217ef8d72b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" Jan 26 17:30:07 crc kubenswrapper[4856]: I0126 17:30:07.919701 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pxr8g/must-gather-xhb5b"] Jan 26 17:30:07 crc kubenswrapper[4856]: I0126 17:30:07.920380 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-pxr8g/must-gather-xhb5b" podUID="0190f6f3-2762-486f-8648-e59c9e5179ad" containerName="copy" containerID="cri-o://3f464ae4156c51b0e76930516abce0275686a5ccb1e2b35e0204d8a9e05e513a" gracePeriod=2 Jan 26 17:30:07 crc kubenswrapper[4856]: I0126 17:30:07.925822 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pxr8g/must-gather-xhb5b"] Jan 26 17:30:08 crc kubenswrapper[4856]: 
I0126 17:30:08.236376 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-pxr8g_must-gather-xhb5b_0190f6f3-2762-486f-8648-e59c9e5179ad/copy/0.log" Jan 26 17:30:08 crc kubenswrapper[4856]: I0126 17:30:08.237169 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pxr8g/must-gather-xhb5b" Jan 26 17:30:08 crc kubenswrapper[4856]: I0126 17:30:08.382574 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0190f6f3-2762-486f-8648-e59c9e5179ad-must-gather-output\") pod \"0190f6f3-2762-486f-8648-e59c9e5179ad\" (UID: \"0190f6f3-2762-486f-8648-e59c9e5179ad\") " Jan 26 17:30:08 crc kubenswrapper[4856]: I0126 17:30:08.382640 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vms6d\" (UniqueName: \"kubernetes.io/projected/0190f6f3-2762-486f-8648-e59c9e5179ad-kube-api-access-vms6d\") pod \"0190f6f3-2762-486f-8648-e59c9e5179ad\" (UID: \"0190f6f3-2762-486f-8648-e59c9e5179ad\") " Jan 26 17:30:08 crc kubenswrapper[4856]: I0126 17:30:08.390447 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0190f6f3-2762-486f-8648-e59c9e5179ad-kube-api-access-vms6d" (OuterVolumeSpecName: "kube-api-access-vms6d") pod "0190f6f3-2762-486f-8648-e59c9e5179ad" (UID: "0190f6f3-2762-486f-8648-e59c9e5179ad"). InnerVolumeSpecName "kube-api-access-vms6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:30:08 crc kubenswrapper[4856]: I0126 17:30:08.442354 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0190f6f3-2762-486f-8648-e59c9e5179ad-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "0190f6f3-2762-486f-8648-e59c9e5179ad" (UID: "0190f6f3-2762-486f-8648-e59c9e5179ad"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:30:08 crc kubenswrapper[4856]: I0126 17:30:08.484707 4856 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0190f6f3-2762-486f-8648-e59c9e5179ad-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 26 17:30:08 crc kubenswrapper[4856]: I0126 17:30:08.484781 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vms6d\" (UniqueName: \"kubernetes.io/projected/0190f6f3-2762-486f-8648-e59c9e5179ad-kube-api-access-vms6d\") on node \"crc\" DevicePath \"\"" Jan 26 17:30:08 crc kubenswrapper[4856]: I0126 17:30:08.607679 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-pxr8g_must-gather-xhb5b_0190f6f3-2762-486f-8648-e59c9e5179ad/copy/0.log" Jan 26 17:30:08 crc kubenswrapper[4856]: I0126 17:30:08.608463 4856 generic.go:334] "Generic (PLEG): container finished" podID="0190f6f3-2762-486f-8648-e59c9e5179ad" containerID="3f464ae4156c51b0e76930516abce0275686a5ccb1e2b35e0204d8a9e05e513a" exitCode=143 Jan 26 17:30:08 crc kubenswrapper[4856]: I0126 17:30:08.608566 4856 scope.go:117] "RemoveContainer" containerID="3f464ae4156c51b0e76930516abce0275686a5ccb1e2b35e0204d8a9e05e513a" Jan 26 17:30:08 crc kubenswrapper[4856]: I0126 17:30:08.608629 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pxr8g/must-gather-xhb5b" Jan 26 17:30:08 crc kubenswrapper[4856]: I0126 17:30:08.639383 4856 scope.go:117] "RemoveContainer" containerID="1c6e1c2b24da05f9138f08890b3f71dfa17ade72ec99276a5860cbc9c8b26e99" Jan 26 17:30:08 crc kubenswrapper[4856]: I0126 17:30:08.690816 4856 scope.go:117] "RemoveContainer" containerID="3f464ae4156c51b0e76930516abce0275686a5ccb1e2b35e0204d8a9e05e513a" Jan 26 17:30:08 crc kubenswrapper[4856]: E0126 17:30:08.691180 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f464ae4156c51b0e76930516abce0275686a5ccb1e2b35e0204d8a9e05e513a\": container with ID starting with 3f464ae4156c51b0e76930516abce0275686a5ccb1e2b35e0204d8a9e05e513a not found: ID does not exist" containerID="3f464ae4156c51b0e76930516abce0275686a5ccb1e2b35e0204d8a9e05e513a" Jan 26 17:30:08 crc kubenswrapper[4856]: I0126 17:30:08.691224 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f464ae4156c51b0e76930516abce0275686a5ccb1e2b35e0204d8a9e05e513a"} err="failed to get container status \"3f464ae4156c51b0e76930516abce0275686a5ccb1e2b35e0204d8a9e05e513a\": rpc error: code = NotFound desc = could not find container \"3f464ae4156c51b0e76930516abce0275686a5ccb1e2b35e0204d8a9e05e513a\": container with ID starting with 3f464ae4156c51b0e76930516abce0275686a5ccb1e2b35e0204d8a9e05e513a not found: ID does not exist" Jan 26 17:30:08 crc kubenswrapper[4856]: I0126 17:30:08.691245 4856 scope.go:117] "RemoveContainer" containerID="1c6e1c2b24da05f9138f08890b3f71dfa17ade72ec99276a5860cbc9c8b26e99" Jan 26 17:30:08 crc kubenswrapper[4856]: E0126 17:30:08.691454 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c6e1c2b24da05f9138f08890b3f71dfa17ade72ec99276a5860cbc9c8b26e99\": container with ID starting with 
1c6e1c2b24da05f9138f08890b3f71dfa17ade72ec99276a5860cbc9c8b26e99 not found: ID does not exist" containerID="1c6e1c2b24da05f9138f08890b3f71dfa17ade72ec99276a5860cbc9c8b26e99" Jan 26 17:30:08 crc kubenswrapper[4856]: I0126 17:30:08.691500 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c6e1c2b24da05f9138f08890b3f71dfa17ade72ec99276a5860cbc9c8b26e99"} err="failed to get container status \"1c6e1c2b24da05f9138f08890b3f71dfa17ade72ec99276a5860cbc9c8b26e99\": rpc error: code = NotFound desc = could not find container \"1c6e1c2b24da05f9138f08890b3f71dfa17ade72ec99276a5860cbc9c8b26e99\": container with ID starting with 1c6e1c2b24da05f9138f08890b3f71dfa17ade72ec99276a5860cbc9c8b26e99 not found: ID does not exist" Jan 26 17:30:09 crc kubenswrapper[4856]: I0126 17:30:09.439316 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0190f6f3-2762-486f-8648-e59c9e5179ad" path="/var/lib/kubelet/pods/0190f6f3-2762-486f-8648-e59c9e5179ad/volumes" Jan 26 17:30:19 crc kubenswrapper[4856]: I0126 17:30:19.401361 4856 scope.go:117] "RemoveContainer" containerID="b8175a0e79754a858867d9a98f2aa6c52214536db6005b6724cf907eb7a891ee" Jan 26 17:30:19 crc kubenswrapper[4856]: E0126 17:30:19.404216 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xm9cq_openshift-machine-config-operator(63c75ede-5170-4db0-811b-5217ef8d72b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" Jan 26 17:30:31 crc kubenswrapper[4856]: I0126 17:30:31.395069 4856 scope.go:117] "RemoveContainer" containerID="b8175a0e79754a858867d9a98f2aa6c52214536db6005b6724cf907eb7a891ee" Jan 26 17:30:31 crc kubenswrapper[4856]: E0126 17:30:31.396040 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xm9cq_openshift-machine-config-operator(63c75ede-5170-4db0-811b-5217ef8d72b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" Jan 26 17:30:43 crc kubenswrapper[4856]: I0126 17:30:43.983692 4856 scope.go:117] "RemoveContainer" containerID="bc8203a7d47ee43722cf1047b5eda9c60a27c27e6ee323351ad4ef557fd2f359" Jan 26 17:30:44 crc kubenswrapper[4856]: I0126 17:30:44.002826 4856 scope.go:117] "RemoveContainer" containerID="ea789d382e8ee7680a387d4c0ac031fe25c48fa1835d8928284e77f51f936f6a" Jan 26 17:30:44 crc kubenswrapper[4856]: I0126 17:30:44.021770 4856 scope.go:117] "RemoveContainer" containerID="26c1821c4f23578ac6a0c1415d6730caff4fddaf6502ee746d86ad133657cf42" Jan 26 17:30:46 crc kubenswrapper[4856]: I0126 17:30:46.395414 4856 scope.go:117] "RemoveContainer" containerID="b8175a0e79754a858867d9a98f2aa6c52214536db6005b6724cf907eb7a891ee" Jan 26 17:30:46 crc kubenswrapper[4856]: E0126 17:30:46.396018 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xm9cq_openshift-machine-config-operator(63c75ede-5170-4db0-811b-5217ef8d72b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" Jan 26 17:30:57 crc kubenswrapper[4856]: I0126 17:30:57.395720 4856 scope.go:117] "RemoveContainer" containerID="b8175a0e79754a858867d9a98f2aa6c52214536db6005b6724cf907eb7a891ee" Jan 26 17:30:57 crc kubenswrapper[4856]: E0126 17:30:57.396460 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-xm9cq_openshift-machine-config-operator(63c75ede-5170-4db0-811b-5217ef8d72b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" Jan 26 17:31:10 crc kubenswrapper[4856]: I0126 17:31:10.395075 4856 scope.go:117] "RemoveContainer" containerID="b8175a0e79754a858867d9a98f2aa6c52214536db6005b6724cf907eb7a891ee" Jan 26 17:31:10 crc kubenswrapper[4856]: E0126 17:31:10.396057 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xm9cq_openshift-machine-config-operator(63c75ede-5170-4db0-811b-5217ef8d72b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" Jan 26 17:31:23 crc kubenswrapper[4856]: I0126 17:31:23.395046 4856 scope.go:117] "RemoveContainer" containerID="b8175a0e79754a858867d9a98f2aa6c52214536db6005b6724cf907eb7a891ee" Jan 26 17:31:23 crc kubenswrapper[4856]: E0126 17:31:23.396001 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xm9cq_openshift-machine-config-operator(63c75ede-5170-4db0-811b-5217ef8d72b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" Jan 26 17:31:35 crc kubenswrapper[4856]: I0126 17:31:35.410882 4856 scope.go:117] "RemoveContainer" containerID="b8175a0e79754a858867d9a98f2aa6c52214536db6005b6724cf907eb7a891ee" Jan 26 17:31:35 crc kubenswrapper[4856]: E0126 17:31:35.411598 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-xm9cq_openshift-machine-config-operator(63c75ede-5170-4db0-811b-5217ef8d72b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" Jan 26 17:31:50 crc kubenswrapper[4856]: I0126 17:31:50.395356 4856 scope.go:117] "RemoveContainer" containerID="b8175a0e79754a858867d9a98f2aa6c52214536db6005b6724cf907eb7a891ee" Jan 26 17:31:50 crc kubenswrapper[4856]: E0126 17:31:50.396252 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xm9cq_openshift-machine-config-operator(63c75ede-5170-4db0-811b-5217ef8d72b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" podUID="63c75ede-5170-4db0-811b-5217ef8d72b3" Jan 26 17:32:04 crc kubenswrapper[4856]: I0126 17:32:04.395359 4856 scope.go:117] "RemoveContainer" containerID="b8175a0e79754a858867d9a98f2aa6c52214536db6005b6724cf907eb7a891ee" Jan 26 17:32:05 crc kubenswrapper[4856]: I0126 17:32:05.602909 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xm9cq" event={"ID":"63c75ede-5170-4db0-811b-5217ef8d72b3","Type":"ContainerStarted","Data":"d8ad8f8ca78c63277c4977095b5c49dedf5824f809bcd40f7a2a1038751c894c"}